Search Results

Search found 11820 results on 473 pages for 'online accounts'.


  • Does Outlook continue to auto-discover account settings for already configured accounts? Can it be prevented?

    - by Oliver Salzburg
    fail2ban just locked me out of our website because something from my desktop was hammering port 443 on the server (a port that is not in use). I saw my IP also requesting "GET /autodiscover/autodiscover.xml HTTP/1.1", so I assume that's what was hitting port 443 as well. But I only have one email account configured in Outlook, and it's working just fine. The account is for the address [email protected], and said server will answer for example.com, but that server is not our MX and it is also not configured as an Exchange server in my mail account. So why is Outlook still trying to retrieve those auto-configuration settings?
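    For background: Outlook periodically re-runs Autodiscover even for already configured accounts (to pick up mailbox moves), and one of its first steps is probing the root domain of the SMTP address over HTTPS, which matches the traffic described above. If those lookups need to be suppressed, Outlook exposes per-step kill switches in the registry. A hedged sketch for Outlook 2010 (the 14.0 key; adjust the version number for other releases, and test before deploying):

        :: stop Outlook probing https://example.com/autodiscover/autodiscover.xml
        reg add HKCU\Software\Microsoft\Office\14.0\Outlook\AutoDiscover /v ExcludeHttpsRootDomain /t REG_DWORD /d 1
        :: optionally also skip the DNS SRV record lookup step
        reg add HKCU\Software\Microsoft\Office\14.0\Outlook\AutoDiscover /v ExcludeSrvRecord /t REG_DWORD /d 1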

    Read the article

  • Can you share offline files cache with two user accounts?

    - by Joel Coehoorn
    I have a new laptop that I use for both home and work. It runs Windows 7 Ultimate and is joined to the domain at work. It is okay to use this laptop for both work and personal activities, and I even have an account set up on the local machine, in addition to the work domain account, specifically to help keep the two separate. At home, I have a file server that I use to share files and printers with my wife's laptop, this new laptop, and my old desktop, which will now become the family machine. My mp3 library is on there, among other things. What I want to do is use the Windows Offline Files feature to keep a synced copy of my music library on the laptop. That part is easy. What's tricky is that I want to share this offline cache between both the local account on the laptop and my work domain account. I could sync them both separately, but then I have two copies of a very large music library stored locally. This also means twice the sync burden, when the domain account is rarely connected to the file share. I really want to be able to sync from the local machine account only, and have the domain account be able to use the synced files. I know where the offline file cache is kept (\Windows\CSC) and I can find the cached files (not encrypted), but permissions on the cache are set up oddly, so using that cache directly is not trivial. Any ideas appreciated.
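    One direction worth exploring, sketched here with no guarantee it survives a sync: the CSC directory is owned by SYSTEM, so inspecting or loosening it requires a SYSTEM shell, which the Sysinternals PsExec tool can provide. The domain and account names below are placeholders:

        :: open a SYSTEM command prompt (PsExec is from the Sysinternals suite)
        psexec -i -s cmd.exe

        :: from that SYSTEM shell, give the work account read access to the cache
        icacls C:\Windows\CSC /grant "WORKDOMAIN\joel":(OI)(CI)RX /T

    The cache tracks entries per security principal, though, so even with readable files the second account may not see them as "offline files"; pointing both accounts' music player at the same plain local folder may prove more reliable than sharing the CSC store itself.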

    Read the article

  • Can't log in via SSH to any accounts set to use /bin/bash as a default shell

    - by Gui Ambros
    I'm trying to install bash as the default shell on an ARM Linux system running on an embedded device (a Synology DS212+ NAS). But there's something really wrong, and I can't figure out what it is. Symptoms:

    1) Root has /bin/bash as its default shell, and can log in normally via SSH:

        $ grep root /etc/passwd
        root:x:0:0:root:/root:/bin/bash
        $ ssh root@NAS
        root@NAS's password:
        Last login: Sun Dec 16 14:06:56 2012 from desktop
        #

    2) joeuser has /bin/bash as default shell, and receives "Permission denied" when trying to log in via SSH:

        $ grep joeuser /etc/passwd
        joeuser:x:1029:100:Joe User:/home/joeuser:/bin/bash
        $ ssh joeuser@localhost
        joeuser@NAS's password:
        Last login: Sun Dec 16 14:07:22 2012 from desktop
        Permission denied, please try again.
        Connection to localhost closed.

    3) Changing joeuser's shell back to /bin/sh makes logins work again:

        $ grep joeuser /etc/passwd
        joeuser:x:1029:100:Joe User:/home/joeuser:/bin/sh
        $ ssh joeuser@localhost
        Last login: Sun Dec 16 15:50:52 2012 from localhost
        $

    To make things even more strange, I can log in as joeuser using /bin/bash on the serial console (!). Also, su - joeuser as root works fine, so the bash binary itself is working fine. In an act of despair, I changed joeuser's uid to 0 in /etc/passwd, but that didn't work either, so it doesn't seem to be anything permission related. It seems that bash is doing some extra checking that sshd doesn't like, and blocking the connections for non-root users. Maybe some sort of sanity checking, or terminal emulation, that is triggering the SIGCHLD, but only when called via ssh.

    I already went through every single item in sshd_config, and also put sshd in debug mode, but didn't find anything strange. Here's my /etc/ssh/sshd_config:

        LogLevel DEBUG
        LoginGraceTime 2m
        PermitRootLogin yes
        RSAAuthentication yes
        PubkeyAuthentication yes
        AuthorizedKeysFile %h/.ssh/authorized_keys
        ChallengeResponseAuthentication no
        UsePAM yes
        AllowTcpForwarding no
        ChrootDirectory none
        Subsystem sftp internal-sftp -f DAEMON -u 000

    And here's the output from /usr/syno/sbin/sshd -d, showing the failed attempt of joeuser trying to log in, with /bin/bash as the shell:

        debug1: Config token is loglevel
        debug1: Config token is logingracetime
        debug1: Config token is permitrootlogin
        debug1: Config token is rsaauthentication
        debug1: Config token is pubkeyauthentication
        debug1: Config token is authorizedkeysfile
        debug1: Config token is challengeresponseauthentication
        debug1: Config token is usepam
        debug1: Config token is allowtcpforwarding
        debug1: Config token is chrootdirectory
        debug1: Config token is subsystem
        debug1: HPN Buffer Size: 87380
        debug1: sshd version OpenSSH_5.8p1-hpn13v11
        debug1: read PEM private key done: type RSA
        debug1: private host key: #0 type 1 RSA
        debug1: read PEM private key done: type DSA
        debug1: private host key: #1 type 2 DSA
        debug1: read PEM private key done: type ECDSA
        debug1: private host key: #2 type 3 ECDSA
        debug1: rexec_argv[0]='/usr/syno/sbin/sshd'
        debug1: rexec_argv[1]='-d'
        Set /proc/self/oom_adj from 0 to -17
        debug1: Bind to port 22 on ::.
        debug1: Server TCP RWIN socket size: 87380
        debug1: HPN Buffer Size: 87380
        Server listening on :: port 22.
        debug1: Bind to port 22 on 0.0.0.0.
        debug1: Server TCP RWIN socket size: 87380
        debug1: HPN Buffer Size: 87380
        Server listening on 0.0.0.0 port 22.
        debug1: Server will not fork when running in debugging mode.
        debug1: rexec start in 6 out 6 newsock 6 pipe -1 sock 9
        debug1: inetd sockets after dupping: 4, 4
        Connection from 127.0.0.1 port 52212
        debug1: HPN Disabled: 0, HPN Buffer Size: 87380
        debug1: Client protocol version 2.0; client software version OpenSSH_5.8p1-hpn13v11
        SSH: Server;Ltype: Version;Remote: 127.0.0.1-52212;Protocol: 2.0;Client: OpenSSH_5.8p1-hpn13v11
        debug1: match: OpenSSH_5.8p1-hpn13v11 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.8p1-hpn13v11
        debug1: permanently_set_uid: 1024/100
        debug1: MYFLAG IS 1
        debug1: list_hostkey_types: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: AUTH STATE IS 0
        debug1: REQUESTED ENC.NAME is 'aes128-ctr'
        debug1: kex: client->server aes128-ctr hmac-md5 none
        SSH: Server;Ltype: Kex;Remote: 127.0.0.1-52212;Enc: aes128-ctr;MAC: hmac-md5;Comp: none
        debug1: REQUESTED ENC.NAME is 'aes128-ctr'
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: expecting SSH2_MSG_KEX_ECDH_INIT
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: KEX done
        debug1: userauth-request for user joeuser service ssh-connection method none
        SSH: Server;Ltype: Authname;Remote: 127.0.0.1-52212;Name: joeuser
        debug1: attempt 0 failures 0
        debug1: Config token is loglevel
        debug1: Config token is logingracetime
        debug1: Config token is permitrootlogin
        debug1: Config token is rsaauthentication
        debug1: Config token is pubkeyauthentication
        debug1: Config token is authorizedkeysfile
        debug1: Config token is challengeresponseauthentication
        debug1: Config token is usepam
        debug1: Config token is allowtcpforwarding
        debug1: Config token is chrootdirectory
        debug1: Config token is subsystem
        debug1: PAM: initializing for "joeuser"
        debug1: PAM: setting PAM_RHOST to "localhost"
        debug1: PAM: setting PAM_TTY to "ssh"
        debug1: userauth-request for user joeuser service ssh-connection method password
        debug1: attempt 1 failures 0
        debug1: do_pam_account: called
        Accepted password for joeuser from 127.0.0.1 port 52212 ssh2
        debug1: monitor_child_preauth: joeuser has been authenticated by privileged process
        debug1: PAM: establishing credentials
        User child is on pid 9129
        debug1: Entering interactive session for SSH2.
        debug1: server_init_dispatch_20
        debug1: server_input_channel_open: ctype session rchan 0 win 65536 max 16384
        debug1: input_session_request
        debug1: channel 0: new [server-session]
        debug1: session_new: session 0
        debug1: session_open: channel 0
        debug1: session_open: session 0: link with channel 0
        debug1: server_input_channel_open: confirm session
        debug1: server_input_global_request: rtype [email protected] want_reply 0
        debug1: server_input_channel_req: channel 0 request pty-req reply 1
        debug1: session_by_channel: session 0 channel 0
        debug1: session_input_channel_req: session 0 req pty-req
        debug1: Allocating pty.
        debug1: session_new: session 0
        debug1: session_pty_req: session 0 alloc /dev/pts/1
        debug1: server_input_channel_req: channel 0 request shell reply 1
        debug1: session_by_channel: session 0 channel 0
        debug1: session_input_channel_req: session 0 req shell
        debug1: Setting controlling tty using TIOCSCTTY.
        debug1: Received SIGCHLD.
        debug1: session_by_pid: pid 9130
        debug1: session_exit_message: session 0 channel 0 pid 9130
        debug1: session_exit_message: release channel 0
        debug1: session_by_tty: session 0 tty /dev/pts/1
        debug1: session_pty_cleanup: session 0 release /dev/pts/1
        Received disconnect from 127.0.0.1: 11: disconnected by user
        debug1: do_cleanup
        debug1: do_cleanup
        debug1: PAM: cleanup
        debug1: PAM: closing session
        debug1: PAM: deleting credentials

    Here you have the full output of sshd -dd, together with ssh -vv. Bash:

        # bash --version
        GNU bash, version 3.2.49(1)-release (arm-none-linux-gnueabi)
        Copyright (C) 2007 Free Software Foundation, Inc.

    The bash binary was cross-compiled from source. I also tried using a pre-compiled binary from the Optware distribution, but had the exact same problem. I checked for missing shared libraries using objdump -x, but they're all there. Any ideas what could be causing this "Permission denied, please try again."? I'm almost diving into the bash source code to investigate, but trying to avoid hours chasing something that may be silly.
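    For anyone chasing something similar, a few generic checks that narrow down whether sshd or bash is rejecting the session (a sketch; none of this is Synology-specific):

        # some PAM/sshd builds reject shells not listed in /etc/shells
        grep bash /etc/shells

        # does a non-interactive command work when an interactive shell does not?
        ssh joeuser@localhost /bin/bash -c 'echo ok'

        # if strace is available (e.g. from Optware), trace a debug-mode sshd on a
        # spare port and watch which exec/open call fails after authentication
        strace -f -e trace=execve,open /usr/syno/sbin/sshd -d -p 2222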

    Read the article

  • Setting up router: I can get online via Ethernet, and I can connect to the router wirelessly, but can't get online over the wireless connection

    - by Gabe
    I'm setting up a new D-Link N300 router, and trying to connect a MacBook Pro and a MacBook, both running OS X 10.6. As the question says, either computer can connect to the internet when connected to the router via Ethernet. And either one can get on the router's wireless network. But when connected wirelessly (without the Ethernet cable plugged in), neither one can get online. I can't figure out why that would be. Any help appreciated.

    Read the article

  • How to Pre-Configure Shared Laptops' Microsoft Outlook 2010 Accounts to Connect to Exchange Server 2007 SP3?

    - by schultkl
    Our IT environment provides 10 shared Microsoft Windows 7 laptops for an office staff of several hundred people. After checking out a laptop and logging in with an Active Directory domain account, office staff frequently run Microsoft Outlook 2010. The first time they do this, however, Microsoft Outlook 2010 prompts the user to create and configure their local account. This takes just several clicks, as Microsoft Outlook 2010 auto-detects the office staff member's Microsoft Exchange Server 2007 (SP3) account. The problem is: all office staff have to do this on each new laptop they use, and until they do so, some functionality does not work (for example, Microsoft Word 2010 Save & Send fails with the error "There was a problem creating the message"). How might our IT department pre-configure the shared laptops so office staff can simply log in and use Microsoft Outlook 2010 without needing to configure a local account?
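    One avenue worth testing (hedged; verify against your deployment tooling first): Outlook 2010 supports a ZeroConfigExchange registry value that makes it build the Exchange profile silently from the logged-on user's Active Directory primary SMTP address, with no prompts. Pushed out via Group Policy Preferences, it applies to whichever staff member logs on to whichever laptop:

        :: sketch: let Outlook 2010 create the Exchange profile automatically from AD
        reg add HKCU\Software\Microsoft\Office\14.0\Outlook\AutoDiscover /v ZeroConfigExchange /t REG_DWORD /d 1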

    Read the article

  • How to configure Windows user accounts for ODBC network with NT authentication?

    - by Ian Mackinnon
    I'm trying to create a connection to a SQL Server database from the ODBC Data Source Administrator using "Windows NT authentication using the network login ID". Both server and client are running Windows XP. It appears that any account with administrator privileges can add the data source on the server*, though connection attempts from the client result in error messages suggesting it is trying to authenticate using a guest account. I found a Microsoft support page that says: "For SQL Server...: connect using the impersonated user account." But it doesn't offer advice about how to do that. How do I impersonate a user account on the server? Or, since it sounds like that would lead to an unfortunate squashing of privileges and loss of accountability: how do I give an account on the client privileges on the server database, and then ensure the client attempts authentication with the privileged account and not with a guest account? I'm aware that I'm providing rather sparse information. This is because I'm in unfamiliar territory and don't know what's pertinent. I'll attempt to add any requested information as quickly as possible.

    *I'm planning on tightening privileges straight after I get it working as it stands.
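    In a workgroup (no domain), Windows NT authentication falls back to the guest account unless the client user's name and password exactly match a local account on the server; "impersonating" in that support article effectively means connecting under such a matching account. A hedged sketch, with placeholder names, of giving the client a non-guest path in:

        :: on the server: create a local account matching the client-side login exactly
        net user ianm "S0me-Passw0rd" /add

        :: grant that Windows account a SQL Server login, then database access
        :: (osql -E connects using NT authentication; run as an administrator)
        osql -E -Q "EXEC sp_grantlogin 'SERVERNAME\ianm'"
        osql -E -d MyDatabase -Q "EXEC sp_grantdbaccess 'SERVERNAME\ianm'"

    The client then authenticates as ianm rather than guest, and privileges can be granted to that login per database instead of being left on the guest account.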

    Read the article

  • How do I configure postfix to use multiple Google Apps user accounts?

    - by john unkas
    I have a postfix installation and have set up relaying through Google Apps, but when I send mail to postfix it relays everything to Google Apps using the one account I have specified in main.cf. Is there a way to do this more dynamically? Ideally, the user would authenticate with postfix when sending mail, and postfix would use that username and password to authenticate against Gmail. Is that possible, or what would be the next best solution? Thanks in advance.
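    Postfix cannot literally reuse the password a user presents to it, but it can come close: sender-dependent authentication picks SASL credentials (and a relayhost) per envelope sender instead of using the single account in main.cf. A hedged sketch; addresses, passwords, and map paths are placeholders:

        # main.cf settings, shown as postconf commands
        postconf -e 'smtp_sasl_auth_enable = yes'
        postconf -e 'smtp_sender_dependent_authentication = yes'
        postconf -e 'sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay'
        postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'

        # /etc/postfix/sender_relay -- relay each user's mail via Google:
        #   [email protected]      [smtp.gmail.com]:587
        #   [email protected]    [smtp.gmail.com]:587
        # /etc/postfix/sasl_passwd -- per-sender Google Apps credentials:
        #   [email protected]      [email protected]:password1
        #   [email protected]    [email protected]:password2

        postmap /etc/postfix/sender_relay /etc/postfix/sasl_passwd
        postfix reload

    True pass-through of the user's own login would need something custom in front of Postfix (e.g. a SASL proxy); the map-based approach above is the standard answer.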

    Read the article

  • Configuring iPad Mail app & Gmail app with different accounts? [migrated]

    - by Steve Crane
    I prefer to use the Gmail app over the standard Mail app on my iPad for reading my personal Gmail (I delete a lot of mails, newsletters, etc., after reading, and this is one tap in Gmail and several in Mail). I have them set up so that my personal Gmail uses the Gmail app and my work email uses the standard Mail app. This all works fine except for one problem. If I'm in Gmail or Mail and send an email, it sends from the relevant email address as expected. My problem is that when I share something via email from Safari or another app, it sends from the email address configured in Settings for Mail (the work one), and I would prefer such sharing to come from my personal email address. Does anyone know if there is a way to achieve this? I could switch the addresses to use the other app, but as I never delete work email and delete personal mail at least 50% of the time, the behaviour of the apps is perfect the way I have them set up; if only I could solve that one little problem of controlling where shared items are sent from. I am using an iPad 2 with iOS 5.1, should that be relevant.

    Read the article

  • Central storage for Windows user accounts' home directories: what hardware/software is needed?

    - by mtkoan
    We have ~120 users in our network, and are endeavoring to centralize logon authentication and home directory storage server-side. Most of the users are on Windows 2000/XP machines, and a few run Mac OS X. Ideally the solution will be open source: can this all be managed from a Linux server running LDAP and Samba? Or would a hacked NAS box running FreeNAS or similar suffice? Or is Micro$oft's Active Directory really the preference here? Is it viable to store PST files on this server for users to read from and write to? They are very large, ~1.5 GB. We have no mail server capable of Exchange or IMAP (and no money for one), only an old POP3 server. What kind of hardware horsepower and network architecture should we have for this kind of thing?
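    On the open-source side: yes, a Linux server running OpenLDAP and Samba 3 can act as an NT4-style domain controller for Windows 2000/XP clients, handling both logon authentication and centralized home directories (the Macs can mount the same shares). A hedged smb.conf sketch; the workgroup name and LDAP location are placeholders:

        [global]
            workgroup = OFFICE
            security = user
            domain logons = yes
            # keep accounts in LDAP rather than a local smbpasswd file
            passdb backend = ldapsam:ldap://localhost
            # leave roaming profiles off; only home directories are centralized
            logon path =

        [homes]
            browseable = no
            read only = no

    On the PST question: Outlook will open a 1.5 GB PST over an SMB share, but PST-over-network is famously fragile and chatty, so budget gigabit to the server and treat it as a stopgap until an IMAP store is affordable.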

    Read the article

  • How can I move authorized applications between google accounts?

    - by zoopp
    I'm looking into creating an email address with a professional name on Gmail, and since I can't change my current one, I have to create a new Google account. Among the things which need to be patched up (e.g. forwarding email to the new address until every other account's contact address is changed), I came across authorized applications. If I am to use the new email address exclusively, I have to somehow move my authorized applications as well, since if I eventually delete my old account I will lose access to the profiles created by those applications (e.g. the Stack Exchange network, YouTube, etc.). How can this move be accomplished?

    Read the article

  • How do shared hosting servers keep executing code from crossing accounts?

    - by acidzombie24
    I am kind of curious: how does a hosting server support multiple users with PHP but keep each user away from the others' code? The 'easy' solution, I thought, was file permissions: every user could have www-data in their group, and the server would have execute access, but users couldn't access each other's files. But then I realized that the user running the PHP would be www-data, which has permission to read everyone's data. So how does a shared host prevent this from happening? PS: I personally use nginx (with FastCGI PHP), but I am somewhat familiar with how Apache works.
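    The standard answer on an nginx + FastCGI stack is one PHP-FPM pool per customer, with each pool's workers running under that customer's own UID, so ordinary file permissions do the isolation; nginx itself stays www-data and only talks to each pool's socket. (Apache gets a similar effect with suexec/suPHP or per-vhost FPM pools.) A hedged sketch; names, paths, and limits are placeholders:

        ; /etc/php5/fpm/pool.d/alice.conf -- one pool per hosting account
        [alice]
        user = alice
        group = alice
        listen = /var/run/php-fpm-alice.sock
        ; the web server, not the customer, must be able to connect to the socket
        listen.owner = www-data
        listen.group = www-data
        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 1
        pm.min_spare_servers = 1
        pm.max_spare_servers = 2

    With alice's files owned alice:alice and mode 0750, PHP code running in bob's pool cannot read them; each nginx server block simply points fastcgi_pass at the right customer's socket.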

    Read the article

  • Easiest way to programmatically create email accounts for use by web app?

    - by cadwag
    I have a web app that creates groups. Each group gets its own discussion board. I would like to add the feature of allowing users to send emails to their "group" within the web app to start a new discussion, or to reply to an email from the "group" to make a new post in an already ongoing discussion. For example, to start a new discussion a user would send:

        From: [email protected]
        To: [email protected]
        Subject: Hey guys! Meet up on Tuesday?
        Body: Yes? No?

    All members of the group would receive an email:

        From: [email protected]
        Subject: Hey guys! Meet up on Tuesday?
        Body: Yes? No?
        Reply-To: [email protected]

    And the app would start a new discussion with:

        Author: Bill Fake
        Subject: Hey guys! Meet up on Tuesday?
        Body: Yes? No?

    This is a pretty standard feature for Google Groups and other big sites. So how do us mere mortals go about implementing this? Is there an easy way, or do I:

    1. Install postfix
    2. Write scripts to create new accounts for each new group
    3. Access the server periodically via POP3 (or IMAP?) to retrieve the email messages sent to each account
    4. Parse the messages for content

    If it's the latter, did I miss a step?
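    A simpler route than one real mailbox per group: have the MTA pipe all group mail straight into the application, and let the app route by the To: address, so "creating" a group needs no mail-side work at all. A hedged postfix sketch; the script path and address pattern are placeholders:

        # /etc/aliases -- pipe group mail into a handler script (it receives the
        # message on stdin, parses the headers, and posts to the right discussion):
        #   groupmail: "|/usr/local/bin/handle-group-mail"
        newaliases

        # /etc/postfix/virtual_groups (a regexp table) -- fold every group
        # address onto that single alias:
        #   /^group[0-9]+@example\.com$/    groupmail
        # then enable it in main.cf:
        postconf -e 'virtual_alias_maps = regexp:/etc/postfix/virtual_groups'
        postfix reload

    The handler script sees each message once, at delivery time, so there is no POP3 polling and no per-group account creation; only step 4 (parsing) remains, and it happens in your own code.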

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise

    In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate-of-return calculations against investment data sourced from a spreadsheet. We demonstrated the calculations against sample data for a small set of accounts. While this worked fine, in the real world the problem is much bigger, because the amount of data is much bigger. So much bigger that our approach in the previous post won't scale to meet the real-world needs. From our previous post, here are the challenges we need to conquer:

    - The actual data that needs to be used lives in a database, not in a spreadsheet
    - The actual data is much, much bigger: too big to fit into the normal R memory space, and too big to want to move across the network
    - The overall process needs to run fast, much faster than a single processor allows
    - The actual data needs to be kept secure, another reason not to move it from the database and across the network
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes

    In this post, we will show how we moved from the sample-data environment to working with full-scale data. This post is based on actual work we did for a financial services customer during a recent proof of concept.

    Getting started with the Database

    At this point, we have some sample data and our IRR function. We were at a similar point in our customer proof-of-concept exercise: we had sample data, but we did not have the full customer data yet. So our database was empty. But this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the). The original post showed, as code screenshots, how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create(), and how we can then access the table IRR_DATA as if it were a normal R data.frame named IRR_DATA; from sql*plus, we could also check out the new IRR_DATA table. With our sample data loaded in the database as a normal Oracle table called IRR_DATA, we proceeded to test our R function against database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, then called our IRR function. This worked. No SQL coding required!

    Going from Crawling to Walking

    Now that we had shown our R code working with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts. In other words, we wanted to implement the split-apply-combine technique we discussed in the first post in this series. Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply(). You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1. This is how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results; a sketch follows below. (If you are following along at home, be sure to have installed our myIRR package on your database server via "R CMD INSTALL myIRR".)
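    The original screenshots are not reproduced here; the following is a hedged R sketch of what those steps plausibly looked like. SimpleMWRRData, SimpleMWRR(), and the myIRR package are from the post, while the connection details and account filter are placeholders:

        # connect and push the sample data.frame into Oracle as table IRR_DATA
        library(ORE)
        ore.connect(user = "rquser", sid = "orcl", host = "dbserver", password = "...")
        ore.create(SimpleMWRRData, table = "IRR_DATA")

        # first test: pull a single account into local R memory and run the function
        one_account <- ore.pull(IRR_DATA[IRR_DATA$ACCOUNT == "ACCT0001", ])
        SimpleMWRR(one_account)

        # split-apply-combine: the database splits IRR_DATA by ACCOUNT and runs the
        # function in embedded R engines on the database server, one group at a time
        results <- ore.groupApply(IRR_DATA, IRR_DATA$ACCOUNT,
                                  function(dat) { library(myIRR); SimpleMWRR(dat) })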
    The interesting thing about ore.groupApply is that the calculation is not actually performed in the desktop R environment from which I am running it. What actually happens is that ore.groupApply uses the Oracle database to perform the work: the Oracle database is what actually splits the IRR_DATA table by ACCOUNT, then takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function, and then combines all the individual results from the calls to the R function.

    This is significant because now the embedded R engine only needs to deal with the data for a single account at a time. Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care. Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory-scalability problem, since we only need to fit the data for a single account in R memory (not all of the data for all of the accounts).

    Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program. Even though I am invoking ore.groupApply from my desktop R program, the actual SimpleMWRR calculation is run by the embedded R engine on the database server, so the IRR_DATA never has to leave the database server. This is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once.

    From Walking to Running

    ore.groupApply is rather nice, but it still has the drawback that I run it from a desktop R instance. This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation. But this is not an issue for ORE: Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations. That is extremely exciting, and it is the way we actually did these calculations in the customer proof.

    Oracle R Enterprise provides a SQL equivalent to ore.groupApply, which it refers to as "rqGroupEval". To use rqGroupEval via SQL, a bit of simple setup is needed: basically, the Oracle database needs to know the structure of the input table and the grouping column, which we define using the database's pipelined table function mechanisms. Once that initial setup of rqGroupEval is done for the IRR_DATA table, the next step is to define our R function to the database, via a call to ORE's rqScriptCreate. Then we can test it. The SQL you use to run rqGroupEval uses the Oracle database pipelined table function syntax. The first argument to irr_dataGroupEval is a cursor defining our input; you can add additional where clauses and subqueries to this cursor as appropriate. The second argument is any additional inputs to the R function. The third argument is the text of a dummy select statement, which the database uses to identify the columns and datatypes to expect the R function to return. The fourth argument is the column of the input table to split/group by. The final argument is the name of the R function as you defined it when you called rqScriptCreate(). A sketch of the whole sequence follows below.
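    The setup script appeared as a screenshot in the original post. The following is a hedged reconstruction based on the documented rqGroupEval pattern of that ORE release; the package name, output columns, and the R function body are assumptions:

        -- a ref-cursor type over IRR_DATA, and a pipelined function backed by rqGroupEvalImpl
        CREATE OR REPLACE PACKAGE irrPkg AS
          TYPE cur IS REF CURSOR RETURN IRR_DATA%ROWTYPE;
        END irrPkg;
        /
        CREATE OR REPLACE FUNCTION irr_dataGroupEval(
            inp_cur irrPkg.cur, par_cur SYS_REFCURSOR,
            out_qry VARCHAR2,   grp_col VARCHAR2,   exp_txt CLOB)
          RETURN SYS.AnyDataSet PIPELINED
          PARALLEL_ENABLE (PARTITION inp_cur BY HASH (ACCOUNT))
          CLUSTER inp_cur BY (ACCOUNT)
          USING rqGroupEvalImpl;
        /
        -- register the R function under the name the SQL call will reference
        BEGIN
          sys.rqScriptCreate('SimpleMWRR', 'function(dat) { ...the IRR calculation... }');
        END;
        /
        -- run it: input cursor, extra parameters (none), a dummy select describing
        -- the output shape, the grouping column, and the registered R function name
        SELECT * FROM TABLE(irr_dataGroupEval(
          CURSOR(SELECT * FROM IRR_DATA),
          NULL,
          'SELECT ''x'' ACCOUNT, 1 IRR FROM dual',
          'ACCOUNT',
          'SimpleMWRR'));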
    The Real-World Results

    In our real customer proof of concept, we had more sophisticated calculation requirements than shown in this simplified blog example. For instance, we had to perform the rate-of-return calculations for 5 separate time periods, so the R code was enhanced to do so. In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that. And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.

    For the full-scale customer test, we loaded the customer data onto a half-rack Exadata X2-2 Database Machine. As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations. To do so with ORE, it is as simple as leveraging the Oracle Database parallel query features: in the SQL used in the customer proof, we put a parallel hint on the cursor that is the input to our rqGroupEval function, e.g. CURSOR(SELECT /*+ parallel(t, 72) */ * FROM IRR_DATA t). That is all we need to do to enable Oracle to use parallel R engines.

    The original post included Real-Time SQL Monitor screenshots of this SQL running during the proof of concept (not reproduced here). From them, you can notice a few things:

    - The SQL completed in 110 seconds (1.8 minutes)
    - We calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation)
    - We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation)
    - We ran with 72 degrees of parallelism spread across 4 database servers
    - Most of our 110 seconds was spent in the "External Procedure call" event
    - On average, we performed 8,200 executions of our R function per second (911k accounts / 110s)
    - On average, each execution was passed 110 rows of data (103m detail rows / 911k accounts)
    - On average, we did 41,000 single-time-period rate-of-return calculations per second (each of the 8,200 executions of our R function did rate-of-return calculations for 5 time periods)
    - On average, we processed over 900,000 rows of database data in R per second (103m detail rows / 110s)

    R + Oracle R Enterprise: Best of R + Best of Oracle Database

    This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time. While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues). It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes.
    In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world. In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.

    Read the article

  • Opinions on dual-salt authentication for low-sensitivity user accounts?

    - by Heleon
    EDIT - Might be useful for someone in the future... Looking around the bcrypt class in PHP a little more, I think I understand what's going on, and why bcrypt is secure. In essence, I create a random blowfish salt, which contains the number of crypt rounds to perform during the encryption step, and the password is then hashed using the crypt() function in PHP. There is no need for me to store the salt I used in the database, because it is embedded in the resulting hash and is not separately needed to verify. The only way to find a password match for an email address (without knowing the salt values or number of rounds) would be to brute-force plain-text passwords against the hash stored in the database, using the crypt() function to verify, which, if you've got a strong password, would just be more effort than it's worth for the user information I'm storing...

    I am currently working on a web project requiring user accounts. The application uses CodeIgniter on the server side, so I am using Ion Auth as the authentication library. I have written an authentication system before, where I used two salts to secure the passwords. One was a server-wide salt which sat as an environment variable in the .htaccess file, and the other was a randomly generated salt created at user signup. This was the method I used in that authentication system for hashing the password:

        $chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";

        // create a random string to be used as the random salt for the password hash
        $size = strlen($chars);
        for ($i = 0; $i < 22; $i++) {
            $str .= $chars[rand(0, $size - 1)];
        }

        // create the random salt to be used for the crypt
        $r_blowfish_salt = "$2a$12$" . $str . "$";

        // grab the website salt
        $salt = getenv('WEBSITE_SALT');

        // combine the website salt and the password
        $password_to_hash = $pwd . $salt;

        // crypt the password string using blowfish
        $password = crypt($password_to_hash, $r_blowfish_salt);

    I have no idea whether this has holes in it or not, but regardless, I moved over to Ion Auth for a more complete set of functions to use with CI. I noticed that Ion only uses a single salt as part of its hashing mechanism (although it does recommend that encryption_key is set in order to secure the database session). The information that will be stored in my database is things like name, email address, location by country, some notes (which users will be advised should not contain sensitive information), and a link to a Facebook, Twitter or Flickr account. Based on this, I'm not convinced it's necessary for me to have an SSL connection on the secure pages of my site.

    My question is: is there a particular reason why only one salt is being used as part of the Ion Auth library? Is it implied that I write my own additional salting in front of the functionality it provides, or am I missing something? Furthermore, is it even worth using two salts, or once an attacker has the random salt and the hashed password, are all bets off anyway? (I assume not, but it's worth checking whether I'm worrying about nothing...)
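    For reference, the property the EDIT relies on: crypt() embeds the algorithm identifier, cost, and salt in its own output, so the stored hash doubles as the salt at verification time. A minimal PHP sketch (the lookup helper is hypothetical):

        <?php
        // $stored_hash: the crypt()/bcrypt string fetched from the users table by
        // email, e.g. "$2a$12$<22-char salt><31-char hash>"
        $stored_hash = get_hash_for_email($email); // hypothetical lookup helper

        // crypt() re-reads the "$2a$12$<salt>" prefix from $stored_hash, so no salt
        // column is needed; a site-wide pepper would be appended here exactly as it
        // was when the password was first hashed
        if (crypt($attempted_password, $stored_hash) === $stored_hash) {
            // password matches
        }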

    Read the article

  • Is there any way to synchronize AD users with Office 365 but still be able to edit them online?

    - by Massimo
    I'm performing a migration to Office 365 from a third-party mail server (MDaemon); the local Active Directory doesn't include any Exchange server, and never had one. We will need directory synchronization in order to enable users to log on to Office 365 using their domain credentials; but it seems that as soon as you enable directory synchronization, you can no longer perform any actions on Office 365 users: all changes need to be made in the local Active Directory and then replicated by the synchronization process. For ordinary users with a single e-mail address and standard features, this is not a big problem; but what about users who need an additional address? What if I need to configure some nonstandard setting, like "hide from address list" or a custom mailbox quota? From what I've gathered, the only supported way to do this, since you can't directly edit Office 365 objects anymore after synchronization is enabled, is to extend the local AD schema with Exchange attributes and then manually edit them (!). Alternatively, you can install at least one local Exchange server and then use the Exchange administrative tools to configure the required settings. Is this correct, or am I missing something? Is there any way to synchronize user accounts and passwords but still be able to edit user settings directly in Office 365? If not (everything really needs to be set locally and then synchronized), is there any simpler way to do this than manually editing LDAP attributes or installing a local Exchange server?
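    Your reading matches how directory synchronization works: synced objects are mastered on-premises. If the AD schema is extended with the Exchange attributes, the per-user edits are at least scriptable rather than raw LDAP editing. A hedged PowerShell sketch using the ActiveDirectory module; the user and address are placeholders:

        # requires the RSAT ActiveDirectory module
        Import-Module ActiveDirectory

        # add a secondary SMTP address (proxyAddresses is multi-valued;
        # lowercase smtp: = secondary, uppercase SMTP: = primary)
        Set-ADUser jdoe -Add @{proxyAddresses = "smtp:[email protected]"}

        # hide the user from the address list via the Exchange schema attribute
        Set-ADUser jdoe -Replace @{msExchHideFromAddressLists = $true}

    Changes flow to Office 365 on the next sync cycle. Without the Exchange schema extensions, the msExch* attributes simply do not exist, which is why the other supported route is a single local Exchange server kept purely for management.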

    Read the article
