Search Results

Search found 49554 results on 1983 pages for 'database users'.

  • Cannot Install Phusion Passenger 3.0.13 with Nginx 1.2.1

    - by LightBe Corp
    I installed the Passenger gem, which installed version 3.0.13. Then I ran passenger-install-nginx-module, which is what the Nginx instructions on http://www.modrails.com say to do. It installs the latest stable Nginx version, 1.2.1 according to the official Nginx wiki. I told it to install Nginx to /usr/local/nginx (the default suggested on the Nginx wiki). I get the following errors:

        Undefined symbols for architecture x86_64:
          "_pcre_free_study", referenced from:
              _ngx_pcre_free_studies in ngx_regex.o
        ld: symbol(s) not found for architecture x86_64
        collect2: ld returned 1 exit status
        make[1]: *** [objs/nginx] Error 1
        make: *** [build] Error 2
        --------------------------------------------
        It looks like something went wrong
        Please read our Users guide for troubleshooting tips:
        /Users/server1/.rvm/gems/[email protected]/gems/passenger-3.0.13/doc/Users guide Nginx.html
        If that doesn't help, please use our support facilities at:
        http://www.modrails.com/
        We'll do our best to help you.

    I have searched for several hours trying to find a resolution, including the Google Group for Phusion Passenger, but found nothing. I do not know whether there is a mismatch in version numbers. The documentation says nothing about this error.
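
    For what it's worth, this particular link error usually means the Nginx build linked against a PCRE older than 8.20 (pcre_free_study() was added in 8.20) while compiling against newer headers - a common mix-up on OS X. A hedged workaround, with version numbers and paths as examples only, is to hand the installer a fresh PCRE source tree so Nginx compiles it in statically (flag names per passenger-install-nginx-module --help in Passenger 3.0.x):

    ```sh
    # Sketch: build Nginx against a bundled, current PCRE (paths/versions are examples).
    # First download and unpack a recent PCRE release from the PCRE project.
    tar xzf pcre-8.30.tar.gz

    # Point the Passenger installer at it; --extra-configure-flags is passed to Nginx's ./configure.
    passenger-install-nginx-module --auto --auto-download \
      --prefix=/usr/local/nginx \
      --extra-configure-flags="--with-pcre=$PWD/pcre-8.30"
    ```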

    Read the article

  • What are the "least legally restrictive" well-connected countries to host a website?

    - by monster
    NB: I am aware that this question is subjective, as it can't be defined precisely, but the answers should still be "objective": country name, and what makes it legally safer. EDIT: A) I am located in Germany. B) I am NOT looking for a place to offer pirated software/media; there is no binary content on my site except "profile icons". Hello! I want to start publishing "social" websites/apps, and I found that the biggest initial problem is this: any and all services I have to depend on - Domain Registrar, DNS provider, Server/Cloud Provider, CDN Provider, even my Insurance Agent - basically say that they can "throw me out" if my website contains "unacceptable" content. It's always phrased in such a way that basically anything can fall under "unacceptable" content. This is very frustrating because you just can't fully control what users post on your "social website", so you basically have to expect when you go to bed that your site will be gone when you wake up. I've heard a lot of horror stories about this. Since the "Terms of Service" of all those providers exist foremost to protect the providers from legal action, and those legal actions depend on the country where they are located, it seems like the first step is to find which country is the "safest" in which to locate a site. "Safest" being defined as: where I am least likely to get in legal trouble with the local authorities if some user posts something unacceptable in some way. The main restriction is that it should also be a "well-connected" country, because there is no point in being "safe" if my users can't reach my sites or the latency is unacceptable. I am targeting English speakers in any country as my future users.

    Read the article

  • Troubleshoot odd large transaction log backups...

    - by Tim
    I have a SQL Server 2005 SP2 system with a single database that is 42 GB in size. It is a modestly active database that sees on average 25 transactions per second. The database is configured in the Full recovery model and we perform transaction log backups every hour. However, seemingly at random, at some point during the day the log backup will go from its average size of 15 MB all the way up to 40 GB. There are only 4 jobs scheduled to run on the SQL Server, and they are all typical backup jobs which occur on a daily/weekly basis. I'm not entirely sure what client activity takes place, as the application servers are maintained by a different department. Is there any good way to track down the cause of these log file growths and pinpoint them to a particular application or client? Thanks in advance.
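
    One way to narrow this down, as a sketch (msdb.dbo.backupset is the standard backup history catalog; the database name is a placeholder): chart the log-backup sizes to find the exact hour, then watch that window with a server-side trace or the open-transaction DMVs.

    ```sql
    -- Log-backup sizes for the database, most recent first (type 'L' = log backup).
    SELECT backup_start_date,
           backup_size / 1048576.0 AS size_mb
    FROM msdb.dbo.backupset
    WHERE database_name = 'YourDatabase'   -- placeholder name
      AND type = 'L'
    ORDER BY backup_start_date DESC;
    ```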

    Read the article

  • Drupal 7: One-time user account

    - by Noob
    I'm going to create a survey in Drupal 7 with the webform module, installed on a Debian system which may be adapted in every way. The users (personally known, approx. 120) doing the survey will walk into a room and complete it in browsers on different computers. After that, they'll leave the room and other people will enter, complete the survey on the same computers, and so on. Each user may enter only one submission. The process needs to be anonymous, i.e. I mustn't have any idea of who made which submission. My current solution is to generate random one-time passwords and hand out one password per user (without noting who got which password). Within the survey there will be a password field where the one-time password is entered. The value is checked by webform to be unique. I'll get the data via CSV or Excel and verify the passwords manually in Excel by comparing them to the list of valid passwords. The problem is: I don't like the idea of manually generating the password list, copying it to Excel and doing a manual check. That's fine for one-time use, but we're going to repeat the survey every once in a while. I'd rather generate one-time logins (like user0001/fdlkjewf, user0002/dfrefnnr, ...) for each survey, hand them out to the users and let Drupal/Debian/whatever check whether a submission is valid or not. Do you have any idea how to batch-generate about 120 users with one-time passwords in Drupal 7, along the lines of the sketch below, and verify that each user may submit the form only once? Do you even have a better idea how to accomplish the task within the intranet? Thank you for your help.
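
    For the batch-generation half, a rough sketch using Drupal 7's user API (the naming scheme, e-mail domain, and drush invocation are assumptions, not a tested recipe); it could be run once per survey round, e.g. as a drush php-script:

    ```php
    <?php
    // Sketch: batch-create 120 one-time survey accounts (Drupal 7 user API).
    // Print the name/password pairs once for the handout sheet, then discard them.
    for ($i = 1; $i <= 120; $i++) {
      $name = sprintf('survey%04d', $i);   // hypothetical naming scheme
      $pass = user_password(12);           // core helper returning a random password
      $edit = array(
        'name'   => $name,
        'pass'   => $pass,
        'mail'   => $name . '@example.invalid',
        'status' => 1,
      );
      user_save(NULL, $edit);              // empty account argument = create a new user
      print "$name\t$pass\n";
    }
    ```

    Webform itself can then enforce the one-submission rule: its form settings include a per-user submission limit, so set that to 1 and disable or delete the accounts after each session.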

    Read the article

  • Use WSUS without client configuration

    - by sc911
    Hello *, is there any way to let client PCs use the local WSUS server without having to configure them? What we need is a system to update PCs before they are delivered to the users. The WSUS server is accessible only within our lab, not later at the users' sites. We'd like to use WSUS because it speeds up the downloads considerably. And we don't want to modify the clients, as those changes might be forgotten and never removed, in which case no updates would be possible once the machine is at the user's site. So the easiest way would be to redirect normal Microsoft Update traffic, but I'm pretty sure that won't be possible, as that update channel is not WSUS-compliant. Another option I thought of is that DHCP could deliver an extra option telling the clients where to get their updates, but I could not find any information about this, so it looks like that isn't possible either. So, is there any way? Or would it be easier to use a little script to change the WSUS entries automatically, as sketched below? Regards sc911
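
    If the script route is acceptable, the client-side WSUS pointers are plain registry values, so a pair of tiny scripts (the server URL is an example) can switch a lab PC onto WSUS and cleanly back off it before delivery:

    ```bat
    rem Point the client at the lab WSUS server (run elevated; URL is an example).
    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUServer /t REG_SZ /d http://wsus.lab.local /f
    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUStatusServer /t REG_SZ /d http://wsus.lab.local /f
    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v UseWUServer /t REG_DWORD /d 1 /f
    wuauclt /detectnow

    rem Before delivery: delete the policy key so the PC reverts to Microsoft Update.
    reg delete "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /f
    ```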

    Read the article

  • Roaming Profiles & Redirected Folders - storage consumption? offline files and caching?

    - by Ben Swinburne
    I understand the concepts of both roaming profiles and folder redirection and have used both separately before. I am about to set up a network from scratch and would ideally like to use both, primarily for the following reasons: Roaming profiles allow users to log on to any machine and have their profile. Redirected folders allow users to have their My Documents, Desktop, etc. backed up without the need to log off at the end of the day; the servers can run their backups overnight and there are no missing files due to the user not logging off. Redirected folders also largely alleviate the slow log-in times caused by large profiles. My question is: if some of the folders are redirected and therefore not part of the roaming profile, what happens on machines which truly roam (i.e. laptops)? If there are offline files or a cache, does this mean that the problem whereby a user has to log off comes back? By having them both enabled, is there any duplication - i.e. if I have a users$ share and a profiles$ share, would I have Desktop twice, for example?

    Read the article

  • Legal IT documents

    - by TylerShads
    I have been wondering this past week because my big boss told me to start keeping track of all the things I have fixed, how to fix them, etc., which is reasonable and which I have been doing anyway. But then a related question came to mind: what kind of documentation should I have on hand as far as users go? More specifically, I am talking in terms of EULAs, ToS, etc. (correct me please if I'm using the wrong terms), or more specifically a policy, so to speak, for the users and such. I can't say I'm a legal expert, otherwise I'd be a lawyer. The environment the users are in is pretty laid back, so I don't foresee a problem. But assuming a problem should ever arise, what should I have written up/have on hand? EDIT: I really should have noted that we are a medical transport facility and have patient records, so I know that something must be done to comply with HIPAA policies. I do like what anthonysomerset said about the "if I get hit by a bus" scenario and want to apply it not only to the documentation I am currently writing but also to cases where, say, an employee steals info from the server, or edge cases, theft, etc. As far as our staff goes, it's relatively small: a single HR person, no legal department aside from the two owners' lawyers, and me as the only IT person on staff, plus a guy who is no more than a Mac superuser.

    Read the article

  • Automating first time login process in Windows Server 2008 R2 SP1 virtual machine

    - by George Durzi
    I have a set of Windows Server 2008 R2 SP1 Enterprise Edition virtual machines running in Hyper-V. The host server has 64GB of RAM and two SSD drives (one drive for the host OS, and the second one for the VMs). The virtual machines are as follows:

        Domain Controller: 4GB RAM
        Exchange Server: 4GB RAM
        Terminal Services: 50GB RAM

    We use this setup for a travelling training class where users remote desktop to one of the VMs - let's call it the Terminal Services or "TS" VM - where tools such as Visual Studio are installed. The students go through some labs on the TS VM in Visual Studio. Overall, this setup works great. However, when users are collectively logging in for the first time, the VM really struggles to keep up while all the user profiles are created. It can take some users up to 10 minutes to log in. The number of students varies from 30 to 40. A workaround would be to manually remote desktop to the TS virtual machine with every account in advance to ensure that the local profiles are created. I'm looking for a way to automate this first-time login process on the TS virtual machine. I am envisioning iterating through the accounts in a certain Active Directory OU and then somehow initiating a remote desktop session to the TS VM to log each of them in for the first time, along the lines of the sketch below. Are there ways to do this? Thanks
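
    One possible approach, strictly as a sketch (all names and the shared password are examples, and it assumes the training accounts use one known password): loop over the OU from an admin machine, stash each credential with cmdkey so mstsc connects unattended, and let each session build its profile before moving on. The sessions would still need to be logged off afterwards, e.g. with the logoff command on the TS VM.

    ```powershell
    # Sketch: pre-create first-time profiles on the TS VM (names/password are examples).
    Import-Module ActiveDirectory
    $users = Get-ADUser -Filter * -SearchBase "OU=Students,DC=training,DC=local"
    foreach ($u in $users) {
        cmdkey /generic:TERMSRV/TSVM /user:"TRAINING\$($u.SamAccountName)" /pass:"P@ssw0rd!"
        Start-Process mstsc -ArgumentList "/v:TSVM"
        Start-Sleep -Seconds 90                 # give the profile time to be created
        cmdkey /delete:TERMSRV/TSVM
    }
    ```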

    Read the article

  • EFS Remote Encryption

    - by Apoulet
    We have been trying to set up EFS across our domain. Unfortunately, reading/writing files over a network share does not work; we get an "Access Denied" error. Another worrying fact is that I managed to get it working for one machine, but no other would work. The machines are all Windows 2008 R2, running as VMs under an ESXi host. According to http://technet.microsoft.com/en-us/library/bb457116.aspx#EHAA we set up the involved machines to be trusted for delegation. The users are not restricted and can be trusted for delegation. The users have logged in on both sides and can read/write encrypted files without issues locally. I enabled Kerberos logging in the registry, and these are the relevant logs I get on the machine that holds the encrypted files, for every certificate that the user possesses (only the key name changes):

    Event ID 5058: Audit Success, "Other System Events"

        Key file operation.
        Subject:
            Security ID: {MyDOMAIN}\{MyID}
            Account Name: {MyID}
            Account Domain: {MyDOMAIN}
            Logon ID: 0xbXXXXXXX
        Cryptographic Parameters:
            Provider Name: Microsoft Software Key Storage Provider
            Algorithm Name: Not Available.
            Key Name: {CE885431-9B4F-47C2-8415-2D766B999999}
            Key Type: User key.
        Key File Operation Information:
            File Path: C:\Users\{MyID}\AppData\Roaming\Microsoft\Crypto\RSA\S-1-5-21-4585646465656-260371901-2912106767-1207\66099999999991e891f187e791277da03d_dfe9ecd8-31c4-4b0f-9b57-6fd3cab90760
            Operation: Read persisted key from file.
            Return Code: 0x0

    Event ID 5061: Audit Failure, "System Integrity"

        Cryptographic operation.
        Subject:
            Security ID: {MyDOMAIN}\{MyID}
            Account Name: {MyID}
            Account Domain: {MyDOMAIN}
            Logon ID: 0xbXXXXXXX
        Cryptographic Parameters:
            Provider Name: Microsoft Software Key Storage Provider
            Algorithm Name: RSA
            Key Name: {CE885431-9B4F-47C2-8415-2D766B999999}
            Key Type: User key.
        Cryptographic Operation:
            Operation: Open Key.
            Return Code: 0x8009000b

    Could this be related to this error from the CryptAcquireContext function?

        NTE_BAD_KEY_STATE 0x8009000BL
        The user password has changed since the private keys were encrypted.

    The problem is that the users I am using at the moment cannot change their password.

    Read the article

  • PowerShell SQL query - connection string

    - by sean
    I am trying to query several different SQL servers and run a command on each of them, but I am unable to get the connection string right. Code below. I receive the following error: "Login failed. The login is from an untrusted domain and cannot be used with Windows authentication." I thought that if I passed it the credentials it wouldn't care about the domain. How do I get around this (see the sketch after the code)? Thanks in advance.

        $serverList = @(Get-Content "c:\AllServers.txt")   # Read a file
        $query = "SELECT COUNT(thing) AS [RowCount] FROM My_table"
        $Database = "My_DB"

        foreach ( $svr in $serverList ) {
            $conn = new-object System.Data.SqlClient.SQLConnection
            $ConnectionString = "Server={0};Database={1};User ID=sa;Password=Password;Integrated Security=True" -f $svr, $Database
            $conn.ConnectionString = $ConnectionString
            $conn.Open()
            $cmd = new-object system.Data.SqlClient.SqlCommand($Query, $conn)
            $conn.Close()
        }
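
    For what it's worth, the error is usually caused by Integrated Security=True, which tells the client to ignore the supplied SQL credentials and use Windows authentication instead - and the calling Windows account's domain isn't trusted by the target server. A hedged sketch of the fix (it also actually executes the query, which the loop above never does):

    ```powershell
    # Sketch: SQL authentication only (no Integrated Security), and run the query.
    $serverList = @(Get-Content "c:\AllServers.txt")
    $query = "SELECT COUNT(thing) AS [RowCount] FROM My_table"
    $Database = "My_DB"

    foreach ($svr in $serverList) {
        $conn = New-Object System.Data.SqlClient.SQLConnection
        $conn.ConnectionString = "Server={0};Database={1};User ID=sa;Password=Password" -f $svr, $Database
        $conn.Open()
        $cmd = New-Object System.Data.SqlClient.SqlCommand($query, $conn)
        "{0}: {1}" -f $svr, $cmd.ExecuteScalar()   # print the row count per server
        $conn.Close()
    }
    ```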

    Read the article

  • How to set up NTFS ACLs with Access-Based Enumeration

    - by Patrick Pellegrino
    We're in the process of migrating from Novell NetWare to a Windows 2008 R2 infrastructure (AD, file server, print server, etc.). My question is about ACLs. While NetWare and Windows are totally different, I want to be sure my thinking is right before screwing everything up! Here's the scenario:

        F:
        |
        +-- DATA   <= shared as DATA with Access-Based Enumeration
            |
            +-- Folder 1
                +-- Team 1's Folder
                +-- Team 2's Folder
                ...

    In this layout, by default, rights are inherited from F: down to the deepest folders. What we want: the Administrators group has full control top-down, and from DATA, ABE lists only the folders a user has access to (e.g., if I'm in group Team 2, I see Team 2's Folder). From what I understand, at DATA I should remove all inherited NTFS ACEs (e.g., the Users group), making sure to keep the Administrators group and the SYSTEM account, and after that grant Full Control (or whatever right is needed) on each folder to the groups or users that need access - something like the sketch below. Am I wrong? Is there anything I should take care of? Any help will be much appreciated. Regards.
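
    That reading matches how ABE shares are usually set up. A sketch of the same steps with icacls (domain and group names are examples): disable inheritance at DATA while keeping the copied Administrators/SYSTEM entries, prune the broad Users grant, then give each team rights on its own folder only.

    ```bat
    rem Disable inheritance on DATA and copy current ACEs so they can be pruned (:d = disable+copy).
    icacls "F:\DATA" /inheritance:d
    icacls "F:\DATA" /remove:g "Users"

    rem Grant each team its own folder; (OI)(CI)M = this folder, subfolders and files, Modify.
    icacls "F:\DATA\Folder 1\Team 1's Folder" /grant "MYDOMAIN\Team1:(OI)(CI)M"
    icacls "F:\DATA\Folder 1\Team 2's Folder" /grant "MYDOMAIN\Team2:(OI)(CI)M"
    ```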

    Read the article

  • Anonymous file sharing without login window, from Windows 7 server to XP clients

    - by Niten
    I'm trying to provide machines on a small LAN with read-only, anonymous access to files shared from a Windows 7 workstation (let's call it WIN7SVR). In particular, I don't want clients to have to deal with a login window when they navigate to, e.g., \\WIN7SVR in Windows Explorer, but we do not have a domain and synchronizing accounts between the server and clients would be intractable. There are both Windows 7 and Windows XP clients that need access to these shares. I got this working for Windows 7 clients by just enabling the Guest account on WIN7SVR and setting appropriate share permissions. Other Windows 7 machines automatically try logging in as Guest, it seems, so their users don't have to deal with the login window. The problem is with the XP clients - they can access the server if the user enters "Guest" in the login window, but I don't want users to have to do that. So from what I gather, in my limited understanding of Windows file sharing, this boils down to granting null sessions access to file shares on WIN7SVR. But I've had no success so far on that front. I've tried all the following in the Local Group Policy Editor on the Windows 7 server:

        Set "Network access: Let Everyone permissions apply to anonymous users" to Enabled
        Set "Network access: Restrict anonymous access to Named Pipes and Shares" to Disabled
        Added the names of the corresponding shares to "Network access: Shares that can be accessed anonymously"
        Added "ANONYMOUS LOGON" to "Access this computer from the network" under User Rights Assignment

    Any advice would be highly appreciated (a way to double-check the applied settings is sketched below)... I'm mostly a Unix guy, so I feel somewhat out of my league with Windows file sharing. I do understand that any sort of anonymous access to file shares isn't generally ideal from a security standpoint, but it's the most practical solution for us in this case, and access to our network is well enough controlled that share-level security isn't a concern.
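
    For double-checking what actually got applied, the policies above map to plain registry values on the server side; a hedged sketch (the share name is an example) of inspecting them directly:

    ```bat
    rem Inspect the LanmanServer null-session settings on WIN7SVR.
    reg query "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v NullSessionShares
    reg query "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v RestrictNullSessAccess

    rem NullSessionShares should list the share (e.g. Public), and RestrictNullSessAccess
    rem should be 0 for the "accessed anonymously" policies to take effect.
    ```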

    Read the article

  • Is there a way to prevent password expiration when a user has no password?

    - by Eric DANNIELOU
    Okay, we all care about security, so users should change their passwords on a regular basis (who said passwords are like underwear?). On Red Hat and CentOS (5.x and 6.x), it's possible to make every real user's password expire after 45 days, with a warning 7 days before. The /etc/shadow entry then looks like:

        testuser:$6$m8VQ7BWU$b3UBovxC5b9p2UxLxyT0QKKgG1RoOHoap2CV7HviDJ03AUvcFTqB.yiV4Dn7Rj6LgCBsJ1.obQpaLVCx5.Sx90:15588:1:45:7:::

    It works very well and most users change their passwords on time. Some users find it convenient not to use any password, only an SSH public key (and I'd like to encourage that). But after 45 days they can't log in, because they have forgotten their password and are asked to change it. Is there a way to prevent password expiration if and only if the password is disabled? Setting

        testuser:!!:15588:1:45:7:::

    in /etc/shadow did not work: testuser is asked to change his password after 45 days. Of course, setting password expiration back to 99999 days works, but: it requires extra work, and security auditors might not be happy. Is there a system-wide parameter that would prompt the user to change an expired password only if he really has one?
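
    If per-account exemptions would satisfy the auditors, a hedged one-liner can lift the ageing only for accounts whose password field is locked, leaving everyone else on the 45-day policy (per the chage man page, -1 removes the maximum-age check):

    ```bash
    # Sketch: drop the max password age only for locked-password (key-only) accounts.
    awk -F: '$2 == "!!" || $2 == "!" {print $1}' /etc/shadow |
    while read -r user; do
        chage -M -1 "$user"
    done
    ```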

    Read the article

  • Moving FederatedEmail/SystemMailbox from One Store to Another - Exchange 2010

    - by ThaKidd
    Hello all. I have just upgraded from Exchange 2003 to 2010. Somehow, I have two mailbox databases on my single Exchange 2010 server. One database contains all of the mailboxes I moved from the 2003 Exchange server; the other contains two SystemMailboxes and one FederatedEmail mailbox. I am just starting to get a grasp on the commands used in the EMS. I was wondering if someone could point me in the right direction for moving these three "system" mailboxes into my actual mailbox database (along the lines of the sketch below) so I can eliminate the second database. I'm just trying to shore up this one server before I roll out my backup Exchange server. Thanks in advance! Your help and ideas are greatly appreciated as I try to make this setup as simple as possible.
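
    Those three are "arbitration" mailboxes, and the EMS moves them like any other mailbox; a sketch with placeholder database names:

    ```powershell
    # Sketch: move the arbitration (system/federation) mailboxes into the main database.
    Get-Mailbox -Arbitration -Database "SecondDB" | New-MoveRequest -TargetDatabase "MainDB"

    # Watch progress, clear the finished requests, then drop the emptied database.
    Get-MoveRequest | Get-MoveRequestStatistics
    Get-MoveRequest -MoveStatus Completed | Remove-MoveRequest
    Remove-MailboxDatabase -Identity "SecondDB"
    ```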

    Read the article

  • Ruby on Rails - How to migrate code from float to decimal?

    - by user1723110
    So I've got a Ruby on Rails codebase that uses floats a lot (lots of to_f calls). It uses a database with some numbers also stored as the float type. I would like to migrate both the code and the database to decimal only. Is it as simple as migrating the database columns to decimal (adding a decimal column, copying the float column to the decimal one, deleting the float column, renaming the decimal column to the old float column's name) and replacing to_f with to_d in the code, or do I need to do more than that? Thanks a lot everyone. Raphael
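
    Roughly, yes - though an in-place change_column avoids the copy/rename dance entirely. A sketch (table, column, precision and scale are examples; pick a precision/scale that covers your data):

    ```ruby
    # Sketch: convert a float column to decimal in place (Rails 3-era migration).
    class ConvertPriceToDecimal < ActiveRecord::Migration
      def up
        change_column :products, :price, :decimal, :precision => 12, :scale => 4
      end

      def down
        change_column :products, :price, :float
      end
    end
    ```

    On the code side, to_d comes from the standard library (require 'bigdecimal/util'), which adds it to String, Integer, and Float; note that converting existing floats keeps whatever rounding error they already carry.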

    Read the article

  • Old scheduled task still being started, but can't find it.

    - by JvO
    System: Windows XP Home. Summary: some scheduled task is still being started by Windows, but I can't find it, nor determine where its configuration is stored. This is turning into a mystery for me... I set up a Windows XP Home machine to run a task at 7:00 AM, using the Scheduled Tasks manager. This was a clean install with no users defined, so you got straight to the desktop after starting the machine. The filesystem uses NTFS. Later on, I needed to introduce users, so I created one (named Sam) with administrator privileges. After this I noticed that the scheduled task failed, most likely due to privilege errors (i.e. it can't write to a network drive). So I want to delete the old task and add it again with the correct user credentials. However... I can't find the old task! I know it is still being executed at 7:00 AM, but there's no mention of it anywhere on the system. I've looked in C:\Windows\Tasks for .job files, but there's only the "MP Scheduled Scan.job" from Security Essentials. I've searched the whole disk for mention of the batch file that is being run, but can't find it. So why is this old task still running, and more importantly, why can't I find it? Would it have something to do with introducing users on XP?
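
    Two places worth checking from a command prompt, since XP keeps legacy AT jobs in a store that the Scheduled Tasks folder view can hide (assuming schtasks is present on the Home edition):

    ```bat
    rem List every task the scheduler knows about, with status and last/next run times.
    schtasks /query /fo LIST /v

    rem Legacy AT jobs are stored by the same service but listed separately.
    at
    ```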

    Read the article

  • Windows Server - sharing files without administrator access

    - by Pawel
    We have an MS Windows Server 2008 R2 based server that is administered by our IT department. We would like to achieve two things simultaneously:

        1. A folder on the server, containing several thousand files (new files added frequently), that is accessible to some Active Directory users (e.g. the board of directors) but is not accessible to IT department employees.
        2. IT department employees still maintain the rights to administer the server, including installing new software and services.

    We have already checked some solutions:

        Using NTFS access rights. Unfortunately IT (members of the "Administrators" group) can set themselves as new owners of the files and change the permissions so that they gain access to the files.
        Enabling EFS. Unfortunately, even if you do not allow IT to access the files, they can still disable EFS completely because they have administrative rights. Moreover, as far as I know, you have to manually add permissions for every user but the owner for each new file - very inconvenient.
        Creating a new role for the IT department that has all privileges apart from taking ownership of files. Unfortunately, if you're not a member of the Administrators group, you cannot install new software, no matter what privileges you add to the role.
        TrueCrypt - nice free encryption software, but with poor sharing capabilities. You can either mount an encrypted container on the server (and then IT has access to its contents) or mount it locally, but then only one user can mount it for writing.
        AxCrypt - free encryption software that enables file-by-file encryption on the server. There are some disadvantages, though: you have to manually encrypt each new file added, the files have their extensions changed, and you can only set one password for all files (so all users have to know that one password).

    Any other ideas? Our budget is limited, so enterprise-class software from Symantec or PGP would probably not be an option.

    Read the article

  • Server configuration for our website [duplicate]

    - by Varun Varunesh
    This question already has an answer here: Can you help me with my capacity planning? (2 answers) We are a start-up, and six months ago we launched the beta version of our website. Now we are in the phase of building the website and web services for the final product. The website is based on PHP, Python, and a MySQL database, running on a WAMP server. Right now, in the beta version, we are using an Azure VM for hosting, with 768MB RAM and a shared CPU. We have on average 200 users coming to the website daily. Now we are trying to increase the number of daily users from 200 to 1500, and I am thinking our server should be able to handle at least 100 concurrent users. We have also developed web services for our mobile apps, which can add further load on the server. So here are the questions that bring me here: I am pretty confused about whether to go with shared hosting or VM-based hosting. If VM, then what configuration will be best for our requirements (as discussed above)? Our current VM is a Windows-based server and is very simple to manage; other than the cost factor, why should I go for a Linux-based server? What other factors should I keep in mind while choosing a server for our requirements?

    Read the article

  • How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment [closed]

    - by lmttag
    Possible Duplicate: How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment. We have an ASP.NET Web API server that serves up a SQL Server data-driven website. The API uses JSON to transfer data from SQL Server to the front end. We need to move it to an internal production environment (nothing will be exposed on the public Internet) and we're having problems - or just not understanding what needs to be done. There are two domains: the corporate domain, where all users log in normally, and the process domain, which contains the database the Web API needs to access. The IT staff wants to put a DMZ between the two domains to house the IIS app and shield the users on the corporate domain from having direct access into the process domain. The ideal configuration is:

        corp domain (end users) <-> firewall (open port 80) <-> DMZ (web server running IIS) <-> firewall (open port 80 or 1433?) <-> process domain (IIS for Web API and SQL Server)

    We don't really understand how to deploy our browser/Web API application in this scenario. Do we need to break up the application so that all the client code is on the IIS server in the DMZ, while the Web API gets installed on the server in the process domain? Or does the entire app (client code and Web API) stay together on the IIS server in the DMZ, which then somehow accesses the SQL Server instance to get data? From the IIS server and app in the DMZ, would you simply access the Web API on the server in the process domain by going to http://server/appname/api/getitems? In the second firewall, between the DMZ and the process domain, would you have to open port 1433, or just port 80, given that the Web API is an HTTP endpoint? Or is there some better way to deploy this (i.e., how are ASP.NET Web API single-page applications, written entirely in HTML5 and JavaScript, supposed to be deployed to production environments)? NB: The servers are Windows Server 2008 R2, SQL Server 2008 R2, and IIS 7.5.

    Read the article

  • EC2 configuration for a medium-load Django service

    - by Luberg
    I have created a very basic Django application which stores email addresses in a database (a coming-soon page for a startup). I launched a t1.micro instance to try out what load it can carry: Nginx + FastCGI from Django + SQLite/PostgreSQL (tried both). A blitz.io test gave me a pretty unhappy result (just 100 users within 1 minute):

        This rush generated 542 successful hits in 1.0 min and we transferred 809.01 KB of data in and out of your app. The average hit rate of 8.81/second translates to about 761,612 hits/day. You got bigger problems though: 87.28% of the users during this rush experienced timeouts or errors!

    I tried putting Varnish in front, disabled Debug mode in Django, and started FastCGI in threaded mode - nothing helps. This is not going to be a super-high-load page - just a coming-soon page to save subscribers' email addresses - but it should handle at least 500-1000 users at the same time at peak... I believe t1.micro is far too small for that, but I have also tried a small instance with no better result. Please let me know whether I should use something different from Amazon EC2, pick something bigger than t1.micro, or whether this is definitely a configuration issue...

    Read the article

  • Secure user authentication in Squid

    - by Isaac
    Once upon a time, there was a beautiful warm virtual jungle in South America, and a squid server lived there. Here is a conceptual image of the network:

                      <the Internet>
                            |
                            |
                  A         |         B
        Users <---------> [squid-Server] <---> [LDAP-Server]

    When the users request access to the Internet, squid asks their name and passport, authenticates them against LDAP, and if LDAP approves them, grants them access. Everyone was happy until some sniffers stole passports on the path between the users and squid [path A]. This disaster happened because squid used the Basic authentication method. The people of the jungle gathered to solve the problem. Some bunnies suggested using the NTLM method. Snakes preferred Digest authentication, while Kerberos was recommended by the trees. In the end, many solutions were offered by the people of the jungle, and all was confusion! The Lion decided to end the situation. He shouted the rules for solutions: The solution shall be secure! The solution shall work with most browsers and software (e.g. download managers). The solution shall be simple and not require another huge subsystem (like a Samba server). The method shall not depend on a special domain (e.g. Active Directory). Then a very reasonable, comprehensive, clever solution was offered by a monkey, making him the new king of the jungle! Can you guess what the solution was? Tip: the path between squid and LDAP is protected by the lion, so the solution does not have to secure it. Note: sorry for this boring and messy story!

    Read the article

  • Is it faster to create indexes before or after data loading in MySQL?

    - by Josh Glover
    I have a data replication process that drops and recreates a few tables in a target database, then loads them up with data from a source database (running on another host, but that is immaterial to the question at hand). The target database does need primary keys and a few other indexes on its tables, but not during the data loading. I'm currently loading all of the data, then creating the indexes. However, index creation takes a pretty long time--30 minutes of my data loader's 5 and a half hour running time. My intuition tells me that creating the indexes at the end should be faster than creating them first, since the index would need to be rewritten with each insert. Can anyone tell me for sure which way is faster? FWIW, I'm running MySQL 5.1 with InnoDB tables.
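
    For InnoDB the answer also depends on the engine build: the optional InnoDB Plugin in 5.1 has fast index creation for secondary indexes, while the builtin 5.1 InnoDB rebuilds the whole table for each ALTER TABLE ... ADD INDEX. A common compromise, sketched below with example names: define the primary key up front (InnoDB clusters the table on it, so adding it later forces a full rebuild), bulk-load in primary-key order, then add the secondary indexes in a single ALTER so the table is scanned once.

    ```sql
    -- Sketch: PK at create time, secondary indexes after the bulk load (names are examples).
    CREATE TABLE target_table (
        id          INT      NOT NULL,
        customer_id INT      NOT NULL,
        created_at  DATETIME NOT NULL,
        PRIMARY KEY (id)
    ) ENGINE=InnoDB;

    -- ... bulk load here, ideally in ascending id order ...

    ALTER TABLE target_table
        ADD INDEX idx_customer (customer_id),
        ADD INDEX idx_created  (created_at);
    ```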

    Read the article

  • SQL Server 2008 - Create Foreign Key Manually

    - by tgriffiths
    I have inherited an old database which wasn't designed very well. It is a SQL Server 2008 database which is missing quite a lot of foreign key relationships. Two of the tables involved are dbo.app_status and dbo.app_additional_info, and I am trying to manually create a FK relationship between dbo.app_status.status_id and dbo.app_additional_info.application_id. I am using SQL Server Management Studio and trying to create the relationship with the query below:

        USE myDatabase;
        GO

        ALTER TABLE dbo.app_additional_info
            ADD CONSTRAINT FK_AddInfo_AppStatus FOREIGN KEY (application_id)
            REFERENCES dbo.app_status (status_id)
            ON DELETE CASCADE
            ON UPDATE CASCADE;
        GO

    However, I receive this error when I run the query:

        The ALTER TABLE statement conflicted with the FOREIGN KEY constraint "FK_AddInfo_AppStatus". The conflict occurred in database "myDatabase", table "dbo.app_status", column 'status_id'.

    I am wondering if the query is failing because each table already contains approximately 130,000 records? Please help. Thanks.
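
    That error almost always means existing rows violate the new constraint - application_id values with no matching status_id - rather than the row count being too high. A sketch for finding the orphans first:

    ```sql
    -- Rows that would break FK_AddInfo_AppStatus: application_ids absent from app_status.
    SELECT DISTINCT a.application_id
    FROM dbo.app_additional_info AS a
    LEFT JOIN dbo.app_status AS s
           ON s.status_id = a.application_id
    WHERE a.application_id IS NOT NULL
      AND s.status_id IS NULL;
    ```

    Once those rows are corrected or removed, the ALTER should succeed. ALTER TABLE ... WITH NOCHECK ADD CONSTRAINT would also let it through, but it leaves the constraint untrusted and the bad data in place.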

    Read the article

  • Can a Windows Domain play along with a Hosted Exchange service?

    - by benzado
    I'm setting up a computer network for a small (10-20 person) company. They are currently using a Hosted Exchange service they are totally happy with. Other than that, they are starting from scratch (the office doesn't even have furniture yet). They will need some kind of file sharing server set up in their office. If I set up a machine as a file server and nothing more, users will have three passwords to deal with: local machine, file server, and email. If I set up a domain controller, the identities for the local machine and file server will be the same. But what about the Hosted Exchange server? Must the users have a separate email password, or is it possible to combine the two? (I realize it might depend on the specific hosting provider, but is it possible at all?) If not, it seems like I have these options: (1) deal with it - users have a separate email password; (2) host Exchange on the local server - more than they want to manage in-house?; (3) purchase a hosted VPS, make it part of the domain, and host Exchange there (or can/should a VPS be a domain controller?). I realize I have a lot of questions in there. The main one: is there any reason to use a Hosted Exchange service if I'm setting up other Windows services?

    Read the article
