Search Results

Search found 32953 results on 1319 pages for 'access secret'.

Page 265/1319

  • Secure Your Wireless Router: 8 Things You Can Do Right Now

    - by Chris Hoffman
    A security researcher recently discovered a backdoor in many D-Link routers, allowing anyone to access the router without knowing the username or password. This isn't the first router security issue and won't be the last. To protect yourself, you should ensure that your router is configured securely. This is about more than just enabling Wi-Fi encryption and not hosting an open Wi-Fi network.

    Disable Remote Access
    Routers offer a web interface, allowing you to configure them through a browser. The router runs a web server and makes this web page available when you're on the router's local network. However, most routers offer a "remote access" feature that allows you to access this web interface from anywhere in the world. Even if you set a username and password, if you have a D-Link router affected by this vulnerability, anyone would be able to log in without any credentials. If you have remote access disabled, you'd be safe from people remotely accessing your router and tampering with it. To do this, open your router's web interface and look for the "Remote Access," "Remote Administration," or "Remote Management" feature. Ensure it's disabled — it should be disabled by default on most routers, but it's good to check.

    Update the Firmware
    Like our operating systems, web browsers, and every other piece of software we use, router software isn't perfect. The router's firmware — essentially the software running on the router — may have security flaws. Router manufacturers may release firmware updates that fix such security holes, although they quickly discontinue support for most routers and move on to the next models. Unfortunately, most routers don't have an auto-update feature like Windows and our web browsers do — you have to check your router manufacturer's website for a firmware update and install it manually via the router's web interface. Check to be sure your router has the latest available firmware installed.

    Change Default Login Credentials
    Many routers have default login credentials that are fairly obvious, such as the password "admin". If someone gained access to your router's web interface through some sort of vulnerability or just by logging onto your Wi-Fi network, it would be easy to log in and tamper with the router's settings. To avoid this, change the router's password to a non-default password that an attacker couldn't easily guess. Some routers even allow you to change the username you use to log into your router.

    Lock Down Wi-Fi Access
    If someone gains access to your Wi-Fi network, they could attempt to tamper with your router — or just do other bad things like snoop on your local file shares or use your connection to download copyrighted content and get you in trouble. Running an open Wi-Fi network can be dangerous. To prevent this, ensure your router's Wi-Fi is secure. This is pretty simple: set it to use WPA2 encryption and use a reasonably secure passphrase. Don't use the weaker WEP encryption or set an obvious passphrase like "password".

    Disable UPnP
    A variety of UPnP flaws have been found in consumer routers. Tens of millions of consumer routers respond to UPnP requests from the Internet, allowing attackers on the Internet to remotely configure your router. Flash applets in your browser could use UPnP to open ports, making your computer more vulnerable. UPnP is fairly insecure for a variety of reasons. To avoid UPnP-based problems, disable UPnP on your router via its web interface. If you use software that needs ports forwarded — such as a BitTorrent client, game server, or communications program — you'll have to forward ports on your router without relying on UPnP.

    Log Out of the Router's Web Interface When You're Done Configuring It
    Cross-site scripting (XSS) flaws have been found in some routers. A router with such an XSS flaw could be controlled by a malicious web page, allowing the web page to configure settings while you're logged in. If your router is using its default username and password, it would be easy for the malicious web page to gain access. Even if you changed your router's password, it would be theoretically possible for a website to use your logged-in session to access your router and modify its settings. To prevent this, just log out of your router when you're done configuring it — if you can't do that, you may want to clear your browser cookies. This isn't something to be too paranoid about, but logging out of your router when you're done using it is a quick and easy thing to do.

    Change the Router's Local IP Address
    If you're really paranoid, you may be able to change your router's local IP address. For example, if its default address is 192.168.0.1, you could change it to 192.168.0.150. If the router itself were vulnerable and some sort of malicious script in your web browser attempted to exploit a cross-site scripting vulnerability, accessing known-vulnerable routers at their local IP address and tampering with them, the attack would fail. This step isn't completely necessary, especially since it wouldn't protect against local attackers — if someone were on your network or software was running on your PC, they'd be able to determine your router's IP address and connect to it.

    Install Third-Party Firmware
    If you're really worried about security, you could also install a third-party firmware such as DD-WRT or OpenWRT. You won't find obscure backdoors added by the router's manufacturer in these alternative firmwares.

    Consumer routers are shaping up to be a perfect storm of security problems — they're not automatically updated with new security patches, they're connected directly to the Internet, manufacturers quickly stop supporting them, and many consumer routers seem to be full of bad code that leads to UPnP exploits and easy-to-exploit backdoors. It's smart to take some basic precautions.

    Image Credit: Nuscreen on Flickr

    Read the article

  • WMI Rights required to read root\MicrosoftIISv2 in IIS7 with IIS6 compatibility mode

    - by JoeBilly
    I need to manage my IIS7 (Windows Server 2008) remotely with a WMI IIS6 API, so I added the IIS6 WMI Compatibility and IIS6 Metabase Compatibility roles to access the root\MicrosoftIIsv2 namespace. I have a domain account which is not an administrator on the remote machine; with administrator rights everything is OK. I configured the following rights for my domain account to access the root\MicrosoftIIsv2 WMI namespace remotely; note that these rights work perfectly on IIS6 and Windows Server 2003:

    DCOM: account in Distributed COM Users; remote & local access to DCOM
    WMI:
      Root\CIMV2 (I need access here too): Execute Methods, Enable Account, Remote Enable
      Root\Default (I need access here too): Execute Methods, Enable Account, Remote Enable
      Root\MicrosoftIISv2: Execute Methods, Enable Account, Provider Write, Remote Enable
    IIS Metabase (Metabase Explorer): LM Full Control (W3SVC inherits these permissions)

    I tried to give some access on C:\Windows\System32\inetsrv too; I don't know if that is needed.

    My issue: I can't list the IIS websites (\root\MicrosoftIISv2:IIsWebServerSetting.Name="W3SVC/*"). I don't get an 'access denied', but nothing is returned.

    My API and PowerShell tests can connect and execute queries in the root\MicrosoftIISv2 namespace. I can read the IIsComputer class, e.g.:

        Get-WmiObject IIsComputer -namespace "ROOT\MicrosoftIISv2" -authentication PacketPrivacy | SELECT *

    I can't read IIsWebServerSetting, IIsWebServer, ... to list the websites: the query returns an empty collection, e.g.:

        Get-WmiObject IIsWebServerSetting -namespace "ROOT\MicrosoftIISv2" -authentication PacketPrivacy | SELECT ServerComment

    All queries work perfectly if the account is an administrator, as already said. I am using PacketPrivacy authentication.

    FI: I get Warning Event 5605 whether or not the account has administrator rights, so it does not seem to have an impact: "The root\MicrosoftIISv2 namespace is marked with the RequiresEncryption flag. Access to this namespace might be denied if the script or application does not have the appropriate authentication level. Change the authentication level to Pkt_Privacy and run the script or application again."

    Update: I have some more information. When I use IIS 6 Metabase Explorer with my administrator account, I can see the rights are correctly inherited for my non-administrator account. But when I try to connect using my non-administrator account, I can list the LM node, but I get an "access denied, failed to get a key's data" error when I try to browse the child nodes. I'll check further. I also tried to trace the WMI activity, and everything seems OK; this tends to confirm that the problem lies in the IIS rights.

    Read the article

  • network design to segregate public and staff

    - by barb
    My current setup has:
    - a pfSense firewall with 4 NICs and potential for a 5th
    - one 48-port 3Com switch and one 24-port HP switch (willing to purchase more)
    - subnet 1) edge (Windows Server 2003 for VPN through Routing and Remote Access)
    - subnet 2) LAN with one WS2003 domain controller/DNS/WINS etc., one WS2008 file server, one WS2003 running Vipre anti-virus and Time Limit Manager (which controls client computer use), and about 50 PCs

    I am looking for a network design for separating clients and staff. I could do two totally isolated subnets, but I'm wondering if there is anything in between so that staff and clients could share some resources such as printers and anti-virus servers, and staff could access client resources, but not vice versa. I guess what I'm asking is: can you configure subnets and/or VLANs like this?
    1) edge for VPN
    2) services available to all other internal networks
    3) staff, which can access services and clients
    4) clients, which can access services but not staff
    By access/non-access, I mean stronger separation than domain usernames and passwords.

    Read the article

  • Continuing permissions issues - ASP.net, IIS 7, Server 2008 - 0x80070005 (http 500.19) error

    - by Re-Pieper
    I created an ASP.NET MVC web application and I am trying to set up IIS.

    The error: HTTP error 500.19, error code 0x80070005, "Cannot read configuration file due to insufficient permissions", config file: C:\inetpub\wwwroot\BudgetManagerMain\BudgetManager\web.config

    If I set the application pool to use 'administrator', I have no problems and can access the site just fine. If I set it to NETWORK SERVICE (or anything else, including self-created admin or non-admin user accounts), I get the above error.

    Things I have tried:
    - The identity for the application pool named 'test' is 'NetworkService'.
    - Set full access privileges for wwwroot and all children files/folders.
    - Verified effective permissions: NETWORK SERVICE has full access.
    - Authentication on my site is set to anonymous and running under the application pool identity.
    - I do not have any physical path credentials set on the website.
    - Confirmed the website is set to run under the application pool named 'test'.

    Using Process Monitor, here is a summary of what I found on the ACCESS DENIED event:

        EVENT TAB:
        Class: File System
        Operation: CreateFile
        Result: Access Denied
        Path: ..\web.config
        Desired Access: Generic Read
        Disposition: Open
        Options: Synchronous IO Non-Alert, Non-Directory File
        Attributes: N
        ShareMode: Read
        AllocationSize: n/a

        PROCESS TAB:
        ...lots of stuff that seems irrelevant
        User: NT AUTHORITY\NETWORK SERVICE

    Read the article

  • How to make a 100% UNIQUE dvd? [closed]

    - by sajawikio
    Is it possible to make a DVD which would be next to impossible to replicate exactly, even with special equipment? To be clear, not in the sense of making the data itself resistant to piracy - nothing like that. More like an analogy to antiques: a reproduction of a piece of furniture can be spotted by a trained eye as not being the original. Is it possible to do this for a DVD? Say a DVD is copied, even by someone using special equipment who is hypothetically dying to copy the DVD exactly for whatever reason - even such a copied DVD would be detectable as not being exactly the original (as if it were a reproduction of an antique), and it would be almost, or preferably completely, impossible to ever copy the DVD 100% exactly again.

    Just some ideas below on the sort of thing I might go about doing to achieve this, but I'm really not sure what programs, media, hardware, etc. would do the trick:

    - Do there exist any blank DVDs that already come pre-recorded with some sort of serial number, bar code, other metadata, or an encrypted hash? Maybe any blank DVD will do but I should get special software to extract hardcoded metadata? If so, which software? Or maybe even special hardware? The "secret" should ideally be readable only by particular software or hardware (preferably only if some sort of key is known and input into the software; otherwise nothing shows up and it looks like just a regular disc), and impossible to replicate, especially not with regular burning hardware and preferably not at all.
    - Is there any special software that can direct the laser's write head to physically "mar" the DVD in such a way that, when played in a DVD player, it makes a particular visual pattern, and the mar itself shows up as a faint scratch on the disc, but would be impossible for someone else to reproduce exactly?

    EDIT: To clarify, the DVD contains video and music that should be playable on DVD players, maybe a menu too (i.e. not a DVD containing software). Also, the question is about how to make the DVD 100% unique, not how to make the actual content of the DVD protected from piracy.

    Read the article

  • Asterisk Outgoing CDR Logging To Mysql

    - by user3295551
    Trying to utilize CDR logging (to MySQL) using custom fields. The problem I am facing occurs only when an outbound call is placed; during inbound calls I can log the custom field with no problem. The reason I am having an issue is that the custom CDR field I need is a unique value for each user on the system.

    sip.conf:

        ...
        [sales_department](!)
        type=friend
        host=dynamic
        context=SalesAgents
        disallow=all
        allow=ulaw
        allow=alaw
        qualify=yes
        qualifyfreq=30

        ;; company sales agents:
        [11](sales_agent)
        secret=xxxxxx
        callerid="<...>"

        [12](sales_agent)
        secret=xxxxxx
        callerid="<...>"

        [13](sales_agent)
        secret=xxxxxx
        callerid="<...>"

        [14](sales_agent)
        secret=xxxxxx
        callerid="<...>"

    extensions.conf:

        [SalesAgents]
        include => Services

        ; Outbound calls
        exten => _1NXXNXXXXXX,1,Dial(SIP/${EXTEN}@myprovider)

        ; Inbound calls
        exten => 100,1,NoOp()
         same => n,Set(CDR(agent_id)=11)
         same => n,CELGenUserEvent(Custom Event)
         same => n,Dial(${11_1},25)
         same => n,GotoIf($["${DIALSTATUS}" = "BUSY"]?busy:unavail)
         same => n(unavail),VoiceMail(11@asterisk)
         same => n,Hangup()
         same => n(busy),VoiceMail(11@asterisk)
         same => n,Hangup()

        exten => 101,1,NoOp()
         same => n,Set(CDR(agent_id)=12)
         same => n,CELGenUserEvent(Custom Event)
         same => n,Dial(${12_1},25)
         same => n,GotoIf($["${DIALSTATUS}" = "BUSY"]?busy:unavail)
         same => n(unavail),VoiceMail(12@asterisk)
         same => n,Hangup()
         same => n(busy),VoiceMail(12@asterisk)
         same => n,Hangup()
        ...

    In the inbound section of the dialplan above I am able to insert the custom CDR field (agent_id). But as you can see in the outbound section, I have been stumped on how to tell the dialplan which agent_id is making the outbound call.

    My question: how can I take agent_id=11, agent_id=12, agent_id=13, agent_id=14, etc., and use each as a custom CDR field on outbound calls?

    Read the article

  • Why is group write permission ignored in Ubuntu?

    - by NorthPole
    I want my user to have full access to the local Apache root folder, and I also want the Apache user to have full access to the same folder. What I did was create a new group called DevGroup and add my user and www-data to it. I also changed the permissions to 770 to allow full group access. But now it won't allow me or the Apache user any kind of access to the folder. Here is what I get with ls:

        drwxrwx--- 12 root DevGroup 4096 Sep 27 17:34 testFolder

    Which seems perfect, but when I try to access the folder as my user I get this:

        var/www$ ls testFolder/
        ls: cannot open directory testFolder/: Permission denied

    And when I try to access a page in the folder from a browser:

        [Thu Sep 27 17:47:16 2012] [error] [client 127.0.0.1] PHP Fatal error: Unknown: Failed opening required '/var/www/testFolder/foo.php' (include_path='.:/usr/share/php:/usr/share/pear') in Unknown on line 0

    What's the problem and how can I fix it?
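    Not from the post: below is a small Python sketch that reports the same facts "ls -ld" and "id" would show, i.e. the folder's mode and owning group, whether the group rwx bits are set, and which groups a user belongs to according to the group database (which can differ from the groups an already-running session was started with). The path and the second user name are illustrative assumptions.

        import grp, os, pwd, stat

        def show_group_access(path, username):
            """Print the facts that matter for a group-permission check on path."""
            st = os.stat(path)
            owning_group = grp.getgrgid(st.st_gid).gr_name
            # groups the user belongs to according to /etc/group, plus the primary group
            member_of = {g.gr_name for g in grp.getgrall() if username in g.gr_mem}
            member_of.add(grp.getgrgid(pwd.getpwnam(username).pw_gid).gr_name)
            print(f"{path}: mode {stat.filemode(st.st_mode)}, group '{owning_group}'")
            print(f"  group rwx bits set: {(st.st_mode & stat.S_IRWXG) == stat.S_IRWXG}")
            print(f"  '{username}' is in groups: {sorted(member_of)}")
            print(f"  '{username}' is in '{owning_group}': {owning_group in member_of}")

        # hypothetical path and user names standing in for the ones in the question
        for user in ("www-data", "northpole"):
            show_group_access("/var/www/testFolder", user)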

    Read the article

  • logrotate> removing delaycompress function: should I compress the last log myself?

    - by user120006
    I'm removing the delaycompress option from my logrotate script. Before running logrotate again, should I compress the last log myself? This is the actual situation:

        -rw-r----- 1 root adm 4,7M  5 mag 18:38 access.log
        -rw-r----- 1 root adm 5,2M 29 apr 05:44 access.log.1
        -rw-r----- 1 root adm 473K 22 apr 05:45 access.log.2.gz
        -rw-r----- 1 root adm 605K 15 apr 05:44 access.log.3.gz
        -rw-r----- 1 root adm 588K  8 apr 05:44 access.log.4.gz

    The question is: should I compress "access.log.1" and then launch logrotate? Or will logrotate understand that I removed the "delaycompress" option and fix things itself?

    Read the article

  • Allowing users in from an IP address without client certificate authentication

    - by John
    I need to allow access to my site without SSL client certificates from my office network, and with SSL client certificates from outside. Here is my configuration:

        <Directory /srv/www>
            AllowOverride All
            Order deny,allow
            Deny from all
            # office network static IP
            Allow from xxx.xxx.xxx.xxx

            SSLVerifyClient require
            SSLOptions +FakeBasicAuth
            AuthName "My secure area"
            AuthType Basic
            AuthUserFile /etc/httpd/ssl/index
            Require valid-user
            Satisfy Any
        </Directory>

    When I'm inside the network and have a certificate, I can access. When I'm inside the network and don't have a certificate, I can't access; it requires a certificate. When I'm outside the network and have a certificate, I can't access; it shows me the basic login screen. When I'm outside the network and don't have a certificate, I can't access; it shows me the basic login screen.

    The following configuration works perfectly:

        <Directory /srv/www>
            AllowOverride All
            Order deny,allow
            Deny from all
            Allow from xxx.xxx.xxx.xxx
            AuthUserFile /srv/www/htpasswd
            AuthName "Restricted Access"
            AuthType Basic
            Require valid-user
            Satisfy Any
        </Directory>

    Read the article

  • How to generate Serial Keys? [closed]

    - by vincent mathew
    Which software can I use to generate product keys if I have the GroupId, KeyId, Secret and Hash for the generation?

    Edit: I had seen a post which generated product keys using this information:

        Additional Key Details/Activation Decryption*:
        GroupId = 86f 2159
        KeyId   = ed46 60742
        Secret  = e0cdc320ba048 3954789545910344
        Hash    = 5f 95

    So I was wondering if there is any software which could generate keys using this information? Thanks.

    Read the article

  • SQL SERVER – Microsoft SQL Server Migration Assistant V6.0 Released

    - by Pinal Dave
    Every company makes a different decision about the database when they start, but as they move forward they mature and make decisions based on their experience and the best interest of the organization. Similarly, quite a few organizations make different decisions on databases, like Sybase, MySQL, Oracle or Access, and as time passes they learn that they now want to move to a different platform. Microsoft makes this easy for SQL Server professionals by releasing various Migration Assistant tools. Last week, Microsoft released Microsoft SQL Server Migration Assistant v6.0. Here are the different tools released last week to migrate various products to SQL Server.

    Microsoft SQL Server Migration Assistant v6.0 for Sybase
    SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Sybase Adaptive Server Enterprise (ASE) to SQL Server and Azure SQL DB. SSMA automates all aspects of migration including migration assessment analysis, schema and SQL statement conversion, and data migration, as well as migration testing.

    Microsoft SQL Server Migration Assistant v6.0 for MySQL
    SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from MySQL to SQL Server and Azure SQL DB. SSMA automates all aspects of migration including migration assessment analysis, schema and SQL statement conversion, and data migration, as well as migration testing.

    Microsoft SQL Server Migration Assistant v6.0 for Oracle
    SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Oracle to SQL Server and Azure SQL DB. SSMA automates all aspects of migration including migration assessment analysis, schema and SQL statement conversion, and data migration, as well as migration testing.

    Microsoft SQL Server Migration Assistant v6.0 for Access
    SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Access to SQL Server. SSMA for Access automates conversion of Microsoft Access database objects to SQL Server database objects, loads the objects into SQL Server and Azure SQL DB, and then migrates data from Microsoft Access to SQL Server and Azure SQL DB.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: SQL Migration

    Read the article

  • Token based Authentication for WCF HTTP/REST Services: Authorization

    - by Your DisplayName here!
    In the previous post I showed how token based authentication can be implemented for WCF HTTP based services. Authentication is the process of finding out who the user is – this includes anonymous users. Then it is up to the service to decide under which circumstances the client has access to the service as a whole or to individual operations. This is called authorization.

    By default, my framework does not allow anonymous users and will deny access right in the service authorization manager. You can however turn anonymous access on – technically that means that instead of denying access, an anonymous principal is placed on Thread.CurrentPrincipal. You can flip that switch in the configuration class that you can pass into the service host/factory:

        var configuration = new WebTokenWebServiceHostConfiguration
        {
            AllowAnonymousAccess = true
        };

    But this is not enough; in addition you also need to decorate the individual operations to allow anonymous access as well, e.g.:

        [AllowAnonymousAccess]
        public string GetInfo()
        {
            ...
        }

    Inside these operations you might have an authenticated or an anonymous principal on Thread.CurrentPrincipal, and it is up to your code to decide what to do. Side note: being a security guy, I like this opt-in approach to anonymous access much better than all those opt-out approaches out there (like the Authorize attribute – or this).

    Claims-based Authorization
    Since there is a ClaimsPrincipal available, you can use the standard WIF claims authorization manager infrastructure – either declaratively via ClaimsPrincipalPermission or programmatically (see also here):

        [ClaimsPrincipalPermission(SecurityAction.Demand,
            Resource = "Claims",
            Operation = "View")]
        public ViewClaims GetClientIdentity()
        {
            return new ServiceLogic().GetClaims();
        }

    In addition you can also turn off per-request authorization (see here for background) via the config and just use the "domain specific" instrumentation. While the code is not 100% done, you can download the current solution here.

    HTH

    (Wanna learn more about federation, WIF, claims, tokens etc.? Click here.)

    Read the article

  • Web API, JavaScript, Chrome & Cross-Origin Resource Sharing

    - by Brian Lanham
    The team spent much of the week working through issues related to Chrome running on Windows 8 consuming cross-origin resources using Web API. We thought it was resolved on day 2 but it resurfaced the next day. We definitely resolved it today though. I believe I do not fully understand the situation, but I am going to explain what I know in an effort to help you avoid and/or resolve a similar issue.

    References
    We referenced many sources during our trial-and-error troubleshooting. These are the links we referenced, in order of applicability to the solution:

    Zoiner Tejada
    JavaScript and other material from -> http://www.devproconnections.com/content1/topic/microsoft-azure-cors-141869/catpath/windows-azure-platform2/page/3

    WebDAV
    Where I learned about "Accept" -> http://www-jo.se/f.pfleger/cors-and-iis

    IT Hit
    Tells about NOT using '*' -> http://www.webdavsystem.com/ajax/programming/cross_origin_requests

    Carlos Figueira
    Sample back-end code (newer) -> http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d
    (older version) -> http://code.msdn.microsoft.com/CORS-support-in-ASPNET-Web-01e9980a

    Background
    As a measure of protection, Web designers (W3C) and implementers (Google, Microsoft, Mozilla) made it so that a request, especially a JSON request (but really any URL), sent from one domain to another will only work if the requestee "knows" about the requester and allows requests from it. So, for example, if you write an ASP.NET MVC Web API service and try to consume it from multiple apps, the browsers used may (will?) indicate that you are not allowed by showing an "Access-Control-Allow-Origin" error indicating the requester is not allowed to make requests.

    Internet Explorer (big surprise) is the odd-hair-colored step-child in this mix. It seems that, running locally at least, IE allows this for development purposes. Chrome and Firefox do not. In fact, Chrome is quite restrictive. Notice the images below: IE shows data (a tabular view with one row for each day of a week) while Chrome does not (trust me, neither does Firefox). Further, the Chrome developer console shows an XmlHttpRequest (XHR) error.

    [Screen captures from IE (left) and Chrome (right). Note that Chrome does not display data and the console shows an XHR error.]

    Why does this happen?
    The Web browser submits these requests and processes the responses, and each browser is different. Okay, so, IE is probably the only one that's truly different. However, Chrome has a specific process of performing a "pre-flight" check to make sure the service can respond to an "Access-Control-Allow-Origin" or Cross-Origin Resource Sharing (CORS) request. So basically, the sequence is, if I understand correctly:

    1) Page loads
    2) JavaScript request processed by browser
    3) Browser prepares to submit request
    4) [Chrome] Browser submits pre-flight request
    5) Server responds with HTTP 200
    6) Browser submits request
    7) Server responds with data
    8) Page shows data

    This situation occurs for both GET and POST methods. Typically, GET methods are called with query string parameters, so there is no data posted. Instead, the requesting domain needs to be permitted to request data, but generally nothing more is required. POSTs, on the other hand, send form data; therefore, more configuration is required (you'll see the configuration below). AJAX requests are not friendly with this (POSTs) either, because they don't post in a form.

    How to fix it.
    The team went through many iterations of self-hair removal and we think we finally have a working solution. The trial-and-error approach eventually worked, and we referenced many sources for the information (indicated above). There are basically three (3) tasks needed to make this work.

    Assumptions: You are using Visual Studio, Web API, JavaScript, Cross-Origin Resource Sharing, and several browsers.

    1. Configure the client
    Joel Cochran centralized our "cors-oriented" JavaScript (from here). There are two calls, one for POST and one for GET:

        function (url, data, callback) {
            console.log(data);
            $.support.cors = true;
            var jqxhr = $.post(url, data, callback, "json")
                .error(function (jqXhHR, status, errorThrown) {
                    if ($.browser.msie && window.XDomainRequest) {
                        var xdr = new XDomainRequest();
                        xdr.open("post", url);
                        xdr.onload = function () {
                            if (callback) {
                                callback(JSON.parse(this.responseText), 'success');
                            }
                        };
                        xdr.send(data);
                    } else {
                        console.log(">" + jqXhHR.status);
                        alert("corsAjax.post error: " + status + ", " + errorThrown);
                    }
                });
        };

    The POST CORS JavaScript function (credit to Zoiner Tejada)

        function (url, callback) {
            $.support.cors = true;
            var jqxhr = $.get(url, null, callback, "json")
                .error(function (jqXhHR, status, errorThrown) {
                    if ($.browser.msie && window.XDomainRequest) {
                        var xdr = new XDomainRequest();
                        xdr.open("get", url);
                        xdr.onload = function () {
                            if (callback) {
                                callback(JSON.parse(this.responseText), 'success');
                            }
                        };
                        xdr.send();
                    } else {
                        alert("CORS is not supported in this browser or from this origin.");
                    }
                });
        };

    The GET CORS JavaScript function (credit to Zoiner Tejada)

    Now you need to call these functions to get and post your data (instead of, say, using $.ajax). Here is a GET example:

        corsAjax.get(url, function (data) {
            if (data !== null && data.length !== undefined) {
                // do something with data
            }
        });

    And here is a POST example:

        corsAjax.post(url, item);

    Simple... except... you're not done yet.

    2. Change Web API Controllers to Allow CORS
    There are actually two steps here. Do you remember above when we mentioned the "pre-flight" check? Chrome actually asks the server if it is allowed to ask it for cross-origin resource sharing access. So you need to let the server know it's okay. This is a two-part activity: a) add the appropriate response header Access-Control-Allow-Origin, and b) permit the API functions to respond to various methods including GET, POST, and OPTIONS. OPTIONS is the method that Chrome and other browsers use to ask the server if it can ask about permissions. Here is an example of a Web API controller thus decorated.

    NOTE: You'll see a lot of references to using '*' in the header value. For security reasons, Chrome does NOT recognize this as valid.
[HttpHeader("Access-Control-Allow-Origin", "http://localhost:51234")] [HttpHeader("Access-Control-Allow-Credentials", "true")] [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPATCH, COPY, MOVE, DELETE, MKCOL, LOCK, UNLOCK, PUT, GETLIB, VERSION-CONTROL, CHECKIN, CHECKOUT, UNCHECKOUT, REPORT, UPDATE, CANCELUPLOAD, HEAD, OPTIONS, GET, POST")] [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destination, Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control")] [HttpHeader("Access-Control-Max-Age", "3600")] public abstract class BaseApiController : ApiController {     [HttpGet]     [HttpOptions]     public IEnumerable<foo> GetFooItems(int id)     {         return foo.AsEnumerable();     }     [HttpPost]     [HttpOptions]     public void UpdateFooItem(FooItem fooItem)     {         // NOTE: The fooItem object may or may not         // (probably NOT) be set with actual data.         // If not, you need to extract the data from         // the posted form manually.         if (fooItem.Id == 0) // However you check for default...         {             // We use NewtonSoft.Json.             string jsonString = context.Request.Form.GetValues(0)[0].ToString();             Newtonsoft.Json.JsonSerializer js = new Newtonsoft.Json.JsonSerializer();             fooItem = js.Deserialize<FooItem>(new Newtonsoft.Json.JsonTextReader(new System.IO.StringReader(jsonString)));         }         // Update the set fooItem object.     } } Please note a few specific additions here: * The header attributes at the class level are required.  Note all of those methods and headers need to be specified but we find it works this way so we aren’t touching it. * Web API will actually deserialize the posted data into the object parameter of the called method on occasion but so far we don’t know why it does and doesn’t. * [HttpOptions] is, again, required for the pre-flight check. * The “Access-Control-Allow-Origin” response header should NOT NOT NOT contain an ‘*’. 3. Headers and Methods and Such We had most of this code in place but found that Chrome and Firefox still did not render the data.  Interestingly enough, Fiddler showed that the GET calls succeeded and the JSON data is returned properly.  We learned that among the headers set at the class level, we needed to add “ACCEPT”.  Note that I accidentally added it to methods and to headers.  Adding it to methods worked but I don’t know why.  We added it to headers also for good measure. [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPA... [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destin... Next Steps That should do it.  If it doesn’t let us know.  What to do next?  * Don’t hardcode the allowed domains.  Note that port numbers and other domain name specifics will cause problems and must be specified.  If this changes do you really want to deploy updated software?  Consider Miguel Figueira’s approach in the following link to writing a custom HttpHeaderAttribute class that allows you to specify the domain names and then you can do it dynamically.  There are, of course, other ways to do it dynamically but this is a clean approach. http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d

    Read the article

  • Posting to Facebook Page using C# SDK from "offline" app

    - by James Crowley
    If you want to post to a Facebook page using the Facebook Graph API and the Facebook C# SDK from an "offline" app, there are a few steps you should be aware of.

    First, you need to get an access token that your Windows service or app can permanently use. You can get this by visiting the following URL (all on one line), replacing [ApiKey] with your application's Facebook API key:

        http://www.facebook.com/login.php?api_key=[ApiKey]&connect_display=popup&v=1.0
        &next=http://www.facebook.com/connect/login_success.html&cancel_url=http://www.facebook.com/connect/login_failure.html
        &fbconnect=true&return_session=true&req_perms=publish_stream,offline_access,manage_pages&return_session=1
        &sdk=joey&session_version=3

    In the parameters of the URL you get redirected to, this will give you an access key. Note, however, that this only gives you an access key to post to your own profile page. Next, you need to get a separate access key to post to the specific page you want to access. To do this, go to:

        https://graph.facebook.com/[YourUserId]/accounts?access_token=[AccessTokenFromAbove]

    You can find your user id in the URL when you click on your profile image. On this page, you will then see a list of page IDs and corresponding access tokens for each Facebook page. Using the appropriate pair, you can then use code like this:

        var app = new Facebook.FacebookApp(_accessToken);
        var parameters = new Dictionary<string, object>
        {
            { "message", promotionInfo.TagLine },
            { "name", promotionInfo.Title },
            { "description", promotionInfo.Description },
            { "picture", promotionInfo.ImageUrl.ToString() },
            { "caption", promotionInfo.TargetUrl.Host },
            { "link", promotionInfo.TargetUrl.ToString() },
            { "type", "link" },
        };
        app.Post(_targetId + "/feed", parameters);

    And you're done!

    Read the article

  • The Mystery of the Vanishing Disk Space

    - by Oddthinking
    My disk space is dwindling by about 2GB a day! I only have a few more days before I run out of space.

        $ df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda4       143G  126G   11G  93% /
        udev            491M  4.0K  491M   1% /dev
        tmpfs           200M  696K  199M   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            499M  144K  499M   1% /run/shm
        /dev/sda2       1.9G  580M  1.2G  33% /tmp
        /dev/sda1        92M   29M   58M  33% /boot

    I have been searching for the biggest directories/log files, deleting and compressing. But I am still losing the war. Finally, I realised I have a big misunderstanding:

        julian@server1:~$ sudo du -h / | tail -n 1
        16G /

    All of my files in / only add up to 16 GB. That leaves 110 GB unaccounted for! Clearly I have a misunderstanding: I thought the '/dev/sda4' line represented all the files visible from '/'. What should I be reading to understand where the other storage has gone?

    More details: I have an Ubuntu 11.10 server that was set up by data-center staff. It is running:
    - my own code (which is fairly prolific with log files, but otherwise doesn't store much stuff on the drive)
    - duplicity for backups (which tends to store a lot of signature files)
    - various other standard services, like Apache, nagios, etc. They are very lightly used.

    It has been up for about 4 months without a reboot. I lied about the du output (simplified it for effect). It also complained about not being able to access GVFS and the du process's own resources. I believe they are irrelevant:

        du: cannot access `/home/julian/.gvfs': Permission denied
        du: cannot access `/proc/10841/task/10841/fd/4': No such file or directory
        du: cannot access `/proc/10841/task/10841/fdinfo/4': No such file or directory
        du: cannot access `/proc/10841/fd/4': No such file or directory
        du: cannot access `/proc/10841/fdinfo/4': No such file or directory
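    Not part of the original question: a small Python sketch of the same comparison, summing allocated blocks the way du does and asking the filesystem for used space the way df does. The path and the single-filesystem pruning are assumptions; it only reproduces the measurement gap, it does not identify the cause.

        import os

        def du_like_bytes(root="/"):
            """Roughly what `du -sx` reports: allocated blocks for everything
            reachable from root on the same filesystem."""
            root_dev = os.lstat(root).st_dev
            total = 0
            for dirpath, dirnames, filenames in os.walk(root):
                # prune mount points so we stay on one filesystem, like du -x
                kept = []
                for d in dirnames:
                    try:
                        if os.lstat(os.path.join(dirpath, d)).st_dev == root_dev:
                            kept.append(d)
                    except OSError:
                        pass
                dirnames[:] = kept
                for name in filenames:
                    try:
                        total += os.lstat(os.path.join(dirpath, name)).st_blocks * 512
                    except OSError:
                        pass
            return total

        def df_like_bytes(root="/"):
            """Roughly what `df` reports as used: (total - free) blocks * fragment size."""
            vfs = os.statvfs(root)
            return (vfs.f_blocks - vfs.f_bfree) * vfs.f_frsize

        print("du-style total: %.1f GiB" % (du_like_bytes() / 2**30))
        print("df-style used:  %.1f GiB" % (df_like_bytes() / 2**30))

    A large gap between the two numbers means space is held by something a directory walk cannot see, for example files that are deleted but still open, or data hidden under a mount point.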

    Read the article

  • Development teams do not scale

    - by Matt Watson
    Recently I have been thinking about how development teams don't scale very well. The bigger a team and the product get, the more time the team spends fixing software bugs, which means they spend more time on troubleshooting and debugging as they grow. The problem is that since developers don't typically have access to production servers, there is a bottleneck in the process when doing production troubleshooting.

    For a team that has 10 developers, I would guess that 0-2 of them have access to production servers. If that team grows to 20 people, it is probably still the same 0-2 people that have production access. This means that those 2 key people are a bottleneck and the team does not scale correctly as you add more resources. All those new developers want is to help track down and fix software bugs, but they don't have the visibility to do it. So they end up being less productive and frustrated, because they really want to fix the problems. The people who do have production access end up spending too much of their time doing troubleshooting instead of working on new projects.

    The solution is to remove the bottlenecks and get those people working on more important tasks. Stackify can solve this problem by giving all the developers read-only access to production servers. This allows them to access the information they need to do troubleshooting on their own.

    Read the article

  • Ditch cPanel / WHM in favour of manual setup

    - by BWRic
    We currently use cPanel/WHM on a reseller account but are looking at getting a dedicated server. My first thought was to duplicate this setup on the dedicated box to allow us to quickly create new accounts. It will be a managed server, so they'll have set up the LAMP stack. I'm curious whether I actually need cPanel and WHM. We don't use many of the features from cPanel/WHM, just creating accounts and databases; clients do not have FTP access. I'm no sysadmin and come from a Windows/GUI background, but I have some knowledge of setting up development servers.

    WHM: Creating accounts
    I presume this sets up the Apache virtual host, FTP access and DNS settings. I have some knowledge of editing the Apache files to create virtual hosts. Am I correct in thinking that as long as the DNS is pointing to the server IP and the virtual host is configured, the server can serve the (PHP) pages? I'm not sure I need per-site FTP access, as only we will have access, so I could have server-wide/htdocs-only access to view all the sites. The company who supply the dedicated hosts also provide their own DNS management tool, so I don't need the cPanel one.

    MySQL: Creating users and databases
    We use cPanel to create the MySQL users and databases. As it's a dedicated box and I can have root access, I think this could be replaced by SQLyog for database management and phpMyAdmin for user management.

    Do I need cPanel, or can I get by editing a few text files to create the accounts, then use the MySQL tools for databases? Or am I missing something major about how the sites are configured?

    Read the article

  • OAuth gives me 401 error

    - by Radek
    I am trying to get the access key but I cannot make it work. request_token.get_access_token is giving me a 401 Unauthorized (OAuth::Unauthorized) error. I copy the authorize_url into my browser, allow the application, and I receive some kind of PIN from Twitter, but after hitting enter in my script I always get the 401 error.

    I did some searching and found that this is supposed to help:

        access_token = request_token.get_access_token(:oauth_verifier => params[:oauth_verifier])

    but it is giving me "undefined local variable or method `params' for main:Object (NameError)".

    The Ruby script is as follows (I was following this tutorial):

        gem 'oauth'
        require 'oauth/consumer'

        consumer_key = 'your key'
        consumer_secret = 'your secret'

        consumer = OAuth::Consumer.new "consumer_key", "consumer_secret", {:site => "http://twitter.com"} #{:site=>"https://agree2.com"}

        request_token = consumer.get_request_token
        puts request_token.token
        puts request_token.secret
        puts request_token.authorize_url

        puts "Hit enter when you have completed authorization."
        STDIN.gets

        access_token = request_token.get_access_token
        #access_token = request_token.get_access_token(:oauth_verifier => params[:oauth_verifier])

        puts access_token.token
        puts access_token.secret
        puts
        puts access_token.inspect

    Read the article

  • Interop: HmacSHA1 in Java and dotNet

    - by wilth
    Hello,

    In an app we are calculating an HMAC-SHA1 in Java using the following:

        SecretKey key = new SecretKeySpec(secret, "HmacSHA1");
        Mac m = Mac.getInstance("HmacSHA1");
        m.init(key);
        byte[] hmac = m.doFinal(data);

    Later, the HMAC is verified in C# - on a smart card - using:

        HMACSHA1 hmacSha = new HMACSHA1(secret);
        hmacSha.Initialize();
        byte[] hmac = hmacSha.ComputeHash(data);

    However, the result is not the same. Did I overlook something important? The inputs seem to be the same. Here are some sample inputs:

        Data:          546573746461746131323341fa3c35
        Key:           6d795472616e73616374696f6e536563726574
        Result (Java): 37dbde318b5e88acbd846775e38b08fe4d15dac6
        Result (C#):   dd626b0be6ae78b09352a0e39f4d0e30bb3f8eb9

    I wouldn't mind implementing my own HMAC-SHA1 on both platforms, but I'd rather use what already exists. Thanks!
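    Not part of the original question: a short Python cross-check can help pin down which bytes each platform is actually hashing. The sketch below computes HMAC-SHA1 both over the decoded bytes of the sample key/data and over their ASCII hex text; comparing its two outputs against the Java and C# results would show whether one side is hashing un-decoded strings. This is an assumption about the likely mismatch, not a confirmed diagnosis.

        import hashlib
        import hmac

        # sample inputs from the question, as hex strings
        key_hex = "6d795472616e73616374696f6e536563726574"
        data_hex = "546573746461746131323341fa3c35"

        # HMAC-SHA1 over the raw bytes the hex strings encode
        over_bytes = hmac.new(bytes.fromhex(key_hex),
                              bytes.fromhex(data_hex),
                              hashlib.sha1).hexdigest()

        # HMAC-SHA1 over the ASCII text of the hex strings, i.e. what you get
        # if one side forgets to hex-decode before hashing
        over_text = hmac.new(key_hex.encode("ascii"),
                             data_hex.encode("ascii"),
                             hashlib.sha1).hexdigest()

        print("over decoded bytes:", over_bytes)
        print("over hex text:     ", over_text)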

    Read the article

  • Best Practice to Implement Secure Remember Me

    - by Yan Cheng CHEOK
    Sometimes I come across a web development framework which doesn't provide an authentication feature like Authentication in ASP.NET. I was wondering what security measures need to be considered when implementing a "Remember Me" login feature by hand.

    Here are the things I usually do:

    1) Store the user name in a cookie. The user name is not encrypted.
    2) Store a secret key in a cookie. The secret key is generated using a one-way function based on the user name. The server verifies the secret key against the user name, to ensure the user name has not been changed.
    3) Use HttpOnly on the cookie. http://www.codinghorror.com/blog/2008/08/protecting-your-cookies-httponly.html

    Is there anything else I could have missed which could possibly lead to a security hole?
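    Below is a minimal sketch (my own, not from the post) of a common hardening of step 2: instead of deriving the cookie secret from the user name alone, sign the user name plus an expiry with a key that never leaves the server, and compare signatures in constant time. The names, token layout, and lifetime are illustrative assumptions.

        import hashlib
        import hmac
        import time
        from typing import Optional

        SERVER_SECRET = b"change-this-server-side-key"  # assumption: stored server-side only

        def make_remember_token(username: str, days_valid: int = 30) -> str:
            expires = str(int(time.time()) + days_valid * 86400)
            payload = f"{username}|{expires}"
            sig = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
            return f"{payload}|{sig}"  # value to store in the cookie

        def verify_remember_token(token: str) -> Optional[str]:
            try:
                username, expires, sig = token.rsplit("|", 2)
            except ValueError:
                return None
            expected = hmac.new(SERVER_SECRET, f"{username}|{expires}".encode(),
                                hashlib.sha256).hexdigest()
            if not hmac.compare_digest(sig, expected):
                return None  # cookie was altered
            if int(expires) < time.time():
                return None  # token expired
            return username

    The cookie holding the token should still be marked HttpOnly (and Secure where possible), as in step 3.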

    Read the article

  • Can't get gitosis and ssh to play nice on cygwin

    - by Noel Kennedy
    I have followed this guide to setting up gitosis on a Windows 2003 server via Cygwin. I have now got to a point where it largely works: I can clone, pull and push. The problem I am having is that I think I have not got the SSH bit right at all.

    When I connect via msysgit from machines and accounts where I have not created or uploaded SSH keys, it works. Every time I clone, pull or push I get a password challenge for the 'git' user running on the server, but basically I can execute git commands.

    When I connect with users with an SSH key in the ~/.ssh folder, I don't get the password challenge and instead I get a permissions failure:

        DEBUG:gitosis.serve.main:Got command "git-upload-pack '/cris.git'"
        DEBUG:gitosis.access.haveAccess:Access check for 'teamcity@hhit24808' as 'writable' on 'cris.git'...
        DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'cris.git', new value 'cris'
        DEBUG:gitosis.access.haveAccess:Access check for 'teamcity@hhit24808' as 'writeable' on 'cris.git'...
        DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'cris.git', new value 'cris'
        DEBUG:gitosis.access.haveAccess:Access check for 'teamcity@hhit24808' as 'readonly' on 'cris.git'...
        DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'cris.git', new value 'cris'
        ERROR:gitosis.serve.main:Repository read access denied
        fatal: The remote end hung up unexpectedly

    I have uploaded the public RSA key into the key_dir folder. Here is my conf file:

        [gitosis]
        loglevel = DEBUG

        [group gitosis-admin]
        writable = gitosis-admin
        members = myemail@mydomain

        [group cris-developers]
        members = myemail@mydomain TeamCity@HHIT24808
        writable = cris

    If it matters, I have generated a key without a passphrase, as I believe this is necessary to enable SSH for automated scripts. When I use keys with a passphrase, I get challenged for the phrase but then get the same permissions problem. I have tried 'writable' and 'writeable' for permissions. Help!!

    Update 1: When I try to clone a non-existent repo, I get the same error message - coincidence?

    Update 2: Weird, I've got one machine and one login working. It seems to be something to do with the syntax for addressing git over SSH. This now works on one machine for one login:

        git clone git@servername:cris.git

    The same command fails for a user on another machine without an uploaded SSH key. But this command works (after being challenged for git@servername's password):

        git clone git@servername:/home/git/repositories/cris.git

    Neither command works on a second login whose SSH key has been uploaded.

    Read the article

  • bit.ly API thumbnails

    - by user160222
    The bit.ly info API returns URLs for thumbnail images. When I attempt to access these images, I receive an access denied error. How do you access the thumbnail images generated by bit.ly?

    EDIT: I am testing the image URL by taking the URL returned by the info API and opening it in the browser. bit.ly returns an Access Denied error code. Do I need to somehow provide the login and apiKey?

    Read the article

  • How do I (or should I?) access the service layer from a SiteMesh template (views/layouts/main.gsp) in Grails?

    - by knorv
    I need to create a toplist in the page footer on a site that I'm building. The footer is created in the default SiteMesh layout template (views/layouts/main.gsp). In order to create the toplist, access to the database is needed, so I've encapsulated all the logic needed for the toplist creation in a service class (services/FooService). Please note that while services are usually accessed from the controller layer, in this case the default layout template (views/layouts/main.gsp) is not generated from a controller.

    Can the layout view (views/layouts/main.gsp) access a service class? How? Is this the correct design decision? If not, what is a better encapsulation, and how do I interact with it from the layout view (views/layouts/main.gsp)?

    Read the article
