Search Results

Search found 22998 results on 920 pages for 'supervised users'.


  • How to enable telnet with port 3306 during Master to master replication on MySQL Server

    - by Mainio
    I am setting up master-to-master replication on MySQL under Windows Server 2008. I can successfully replicate all databases from Master 1 to Master 2, but changes made on Master 2 are not replicated back to Master 1. I then found that I can telnet from Master 2 to Master 1 on port 3306, but I cannot telnet from Master 1 to Master 2. Checking netstat on both masters gives the following (I can't publish the public IPs, so I have substituted the names Master 1 and Master 2 for their respective addresses):

      Master 1
        C:\Users\XXXXX>netstat
        Active Connections
        Proto  Local Address      Foreign Address     State
        TCP    Master 1:3306      Master 2:61566      ESTABLISHED
        TCP    Master 1:3389      My remote:56053     ESTABLISHED
        TCP    127.0.0.1:3306     Master 1:60675      ESTABLISHED
        TCP    127.0.0.1:3306     Master 1:60712      ESTABLISHED
        TCP    127.0.0.1:60675    Master 1:3306       ESTABLISHED
        TCP    127.0.0.1:60712    Master 1:3306       ESTABLISHED

      Master 2
        C:\Users\XXXX>netstat
        Active Connections
        Proto  Local Address      Foreign Address     State
        TCP    Master 2:3389      My remote:56124     ESTABLISHED
        TCP    Master 2:61566     Master 1:3306       ESTABLISHED
        TCP    Master 2:61574     bil-sc-cm02:http    ESTABLISHED
        TCP    127.0.0.1:3306     Master 2:61562      ESTABLISHED
        TCP    127.0.0.1:3306     Master 2:61563      ESTABLISHED
        TCP    127.0.0.1:61562    Master 2:3306       ESTABLISHED
        TCP    127.0.0.1:61563    Master 2:3306       ESTABLISHED
        TCP    127.0.0.1:61573    Master 2:3306       TIME_WAIT

    This suggests that port 3306 on Master 2 is not reachable from outside. How can I fix this? Any suggestion would be much appreciated. Thank you. Regards, Udhyan
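    A couple of checks that may narrow this down (a sketch; the my.ini path and the firewall rule name are assumptions, adjust them to your install): confirm that mysqld on Master 2 is not bound to 127.0.0.1 only, and that the Windows firewall on Master 2 allows inbound 3306.

      REM On Master 2: look for settings that stop mysqld listening externally
      findstr /i "bind-address skip-networking port" "C:\Program Files\MySQL\MySQL Server 5.1\my.ini"

      REM On Master 2: open 3306 in Windows Firewall (Server 2008); the rule name is arbitrary
      netsh advfirewall firewall add rule name="MySQL 3306" dir=in action=allow protocol=TCP localport=3306

      REM Then retest from Master 1
      telnet master2 3306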


  • How to use a local Leopard Server Mail server acting "like" an Exchange mail server

    - by Richard Chevre
    We have a local Exchange 2003 server (company.local) that collects POP3 mail accounts from a distant (company.com) mail server. The mail is collected by the Exchange server every 5-10 minutes and stored locally (on company.local), so the users can read it without going to the "real" mail server (company.com). What was explained to me is that the mail collection is done over POP. Now we are migrating to Snow Leopard Server. We have chosen a new extension for our local domain: .leo. So our mail server's FQDN is mail.company.leo, and the users have mail addresses of the form [email protected].
    A) All works fine, except that I can't find how to tell mail.company.leo that it must retrieve the mail from the "real" public server (mail.company.com). I'm hoping to use IMAP rather than POP.
    B) I can send mail using an SMTP relay from mail.company.leo, but (I know it's trivial) replying is not possible, even if I specify the reply-to as [email protected] (this seems to be related to A).
    I don't know if it's very complicated to achieve what I want (I suspect not, but...), and I'm not a genius. As I'm a little bit lost, I hope somebody can or will help me. Solving this will also allow us to use iCal invitations, so a lot of services depend on these mail server settings. Some of you discussed the fact that we chose a "new" TLD with the .leo extension. We have no problem there; we could use .local, no problem ;) We used .leo instead of .local just to differentiate the two systems (Exchange and Snow Leopard Server). The question was not about that; it was just to ask whether a Snow Leopard mail server can be set up to act like an Exchange server. Again, thank you for your advice and help. Thanks in advance, Richard
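    One possible approach, sketched below, is to have the Snow Leopard box pull the mailboxes down from the public server itself and hand them to the local MTA. fetchmail is an assumption here (it is not part of Mac OS X Server's mail setup), and the account names and password are placeholders:

      # /etc/fetchmailrc (mode 600) on mail.company.leo -- one line per user,
      # fetching over IMAP and delivering to the matching local account:
      #   poll mail.company.com protocol IMAP
      #     user "jsmith" password "secret" is "jsmith" here ssl keep

      # run as a daemon, polling every 5 minutes and delivering to the local MTA
      fetchmail -f /etc/fetchmailrc -d 300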


  • Nagios 403 forbidden, indexes?

    - by Georgi
    I installed Nagios under FreeBSD 9, but I can't get it to be publicly reachable in a browser (from other PCs). I think the problem is with the indexes, or that there is no index file (the front page is main.php instead). Apache says the syntax is OK. The permissions on the directory are 777. The logs print:

      Directory index forbidden by Options directive: /usr/local/www/nagios/

    This is my configuration:

      ScriptAlias /nagios/cgi-bin/ /usr/local/www/nagios/cgi-bin/
      Alias /nagios /usr/local/www/nagios/

      <Directory /usr/local/www/nagios>
          Options +Indexes FollowSymLinks +ExecCGI
          AllowOverride Indexes AuthConfig FileInfo
          Order allow,deny
          Allow from all
          AuthName "Nagios Access"
          AuthType Basic
          AuthUserFile /usr/local/etc/nagios/htpasswd.users
          Require valid-user
      </Directory>

      <Directory /usr/local/www/nagios/cgi-bin>
          Options +ExecCGI
          AllowOverride None
          Order allow,deny
          Allow from all
          AuthName "Nagios Access"
          AuthType Basic
          AuthUserFile /usr/local/etc/nagios/htpasswd.users
          Require valid-user
      </Directory>

    I think the problem is in the indexes, maybe? When I remove the Options line, the site is public and available, but it lists the files and says that indexes are forbidden.
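    Since the front page is main.php rather than index.html, one sketch of a fix (untested here) is to declare a DirectoryIndex for the Nagios directory so Apache has something to serve instead of trying to build a directory listing. The include file name is arbitrary and the paths assume FreeBSD's apache22 layout:

      # /usr/local/etc/apache22/Includes/nagios-index.conf
      <Directory /usr/local/www/nagios>
          DirectoryIndex index.php main.php
      </Directory>

      # check syntax and reload
      apachectl configtest && apachectl graceful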


  • DFSR NTFS Permissions Not Working!??!

    - by megadood
    I have two Windows 2008 Standard servers running DFSR okay. I can create a file on one server and it is replicated to the other, etc. The namespace shared folder on each server is shared with Full Control for Administrators and Change/Read permissions for Everyone. I then browse to the folder on server 1, e.g. \\server1\namespace\share\folder1, right-click the folder, and configure the NTFS permissions as I would like, for example: Administrators Full Control, one user with Read/Write access, and no other users in the list. I save this and then double-check the second server, e.g. \\server2\namespace\share\folder1: I right-click the same folder and can see the NTFS permissions have replicated accordingly.
    I right-click the folder and go to Properties - Security - Advanced - Effective Permissions and select a user that shouldn't be able to get into that folder, e.g. testuser. It agrees with the NTFS permissions and shows that testuser has no ticks next to any permissions, so access should be denied. I then log on to any network PC, or the server, as testuser and browse to \\server1\namespace\share\folder1. It lets me straight in, with no access-denied messages. The same applies to server2. It seems as though all my NTFS permissions are being ignored.
    I have one DFS share, and the subfolders are a mixture of private folders and public folders, so I really need the NTFS permissions to work. Any idea what's going on? Is this normal? From my tests, all users can access any DFSR folder under the namespace\share, which is quite worrying. Thanks


  • Recommendations for secure business collaboration tools

    - by Michael Prescott
    I'm searching for a secure and easy way for business partners to collaboratively edit and exchange documents, share calendars, create schedules, and assign tasks. I suspect the ideal collaboration environment or workflow would actually involve several technologies and services. My co-workers and I have tried a variety of things, from Google Apps to wikis, but nothing feels very fluid or complete. I suppose defining what we need and our constraints is probably in order:
      - collaboratively edit basic text documents and spreadsheets
      - exchange documents like flow-charts, graphs, and files generated by our other desktop applications, but not source code
      - assign tasks to each other and ourselves and track the history of those tasks
      - easily see when relevant documents have been modified since last viewing, with the ability to easily push notifications to relevant workers (a clean front page that shows updates would probably suffice)
      - provide limited access to contract workers and guest users
      - if a remote user's system is compromised (keystroke logger or other spyware), we don't want the criminal to be able to gain access to all business documents (processes, trade secrets, customer lists, etc.) simply because they gained access to a single Google account (or whatever web service)
      - cannot be a difficult-to-administer VPN infrastructure
      - cannot cost more than $100 per month (yeah, money is tight)
      - needs to support up to 25 users
      - we can host our own web applications, but it must be a low-maintenance solution


  • pfsense single MAC is listed with several IP's in ARP table

    - by Tillebeck
    I have this problem: arp table filling up. But I am quite sure that I cannot blame Kaspersky. Scenario: a user plugs his computer in. He waits and waits but gets no IP by DHCP. Then he is told there is an IP conflict, and he ends up assigning himself a static IP to access the net. In the ARP table of the router I see:

      192.168.24.144   00:16:41:42:3c:9e   Lenovo   LAN
      192.168.24.145   00:16:41:42:3c:9e   Lenovo   LAN
      192.168.24.181   00:16:41:42:3c:9e   Lenovo   LAN
      192.168.24.150   00:16:41:42:3c:9e   Lenovo   LAN
      192.168.24.151   00:16:41:42:3c:9e   Lenovo   LAN
      192.168.24.152   00:16:41:42:3c:9e   Lenovo   LAN
      192.168.24.156   00:16:41:42:3c:9e   Lenovo   LAN
      192.168.24.157   00:16:41:42:3c:9e   Lenovo   LAN
      192.168.24.159   00:16:41:42:3c:9e   Lenovo   LAN
      192.168.24.160   00:16:41:42:3c:9e   Lenovo   LAN
      192.168.24.130   00:16:41:42:3c:9e   Lenovo   LAN
      192.168.24.132   00:16:41:42:3c:9e   Lenovo   LAN
      192.168.24.164   00:16:41:42:3c:9e   Lenovo   LAN
      192.168.24.137   00:16:41:42:3c:9e   Lenovo   LAN
      192.168.24.140   00:16:41:42:3c:9e   Lenovo   LAN
      192.168.24.206   00:16:41:42:3c:9e   Lenovo   LAN

    The last one, .206, is the static address he gave himself. Several users describe the exact same problem. It started after removing some filters in the switches, so all users are on one LAN and can see each other. Before, when filters blocked access to each other's computers, no one reported this kind of behavior. Any ideas?
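    A couple of checks from the pfSense shell that may help separate a DHCP problem from an ARP problem (a sketch; the MAC is the one from the table above, and the lease-file path is pfSense's default):

      # What has the DHCP server actually handed out to that MAC?
      grep -i -B7 -A3 '00:16:41:42:3c:9e' /var/dhcpd/var/db/dhcpd.leases

      # Current ARP entries for the same MAC, then flush the table and watch whether they return
      arp -an | grep -i '00:16:41:42:3c:9e'
      arp -d -a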


  • Control Panel for MySQL and PostgreSQL server

    - by jfreak53
    I am looking for a control panel for a CentOS system that will allow me to add users who can control their own MySQL and PostgreSQL databases. I don't want to spend the $25 a month for cPanel on a dedicated server to do this, and cPanel comes with everything else, like a web server and email, that these users don't need. Basically I want to be able to create users who can create their own databases and only see the ones they have created. I want to be able to control their disk space, as with most panels, and they need to be able to create their own DB users as well. Kloxo won't work, as it doesn't natively support PostgreSQL. I tried plain phpMyAdmin, but it won't let users create their own DBs unless they can also see everyone else's. VirtualMin I just can't get to work at all! ha ha. I installed it, and though it works well on its own, if a user signs into Usermin they can see all DBs. If they sign into phpMyAdmin (which basically means any program that connects directly to MySQL) they see all DBs. If they log in to Virtualmin, then yes, they only see their own, but that won't work. I can't seem to think of another way to do this. I can use Webmin and Usermin directly, but phpMyAdmin again lets the users I create either see only one DB and create none, or see all DBs. So that sounds like a permission problem in MySQL and PostgreSQL.
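    Whatever panel ends up on top, the underlying grants look roughly like the sketch below (the user name and password are placeholders). With a prefix-based wildcard grant, MySQL and phpMyAdmin only show a user the databases they have privileges on; a PostgreSQL role with CREATEDB can make its own databases without being superuser.

      # MySQL: let 'alice' create and fully control any database named alice_<something>
      mysql -u root -p -e "CREATE USER 'alice'@'localhost' IDENTIFIED BY 'changeme'; GRANT ALL PRIVILEGES ON \`alice\_%\`.* TO 'alice'@'localhost';"

      # PostgreSQL: a login role that may create its own databases but is not superuser
      psql -U postgres -c "CREATE ROLE alice LOGIN PASSWORD 'changeme' CREATEDB;"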


  • Missing Home Folder XP Clients 2008R2 Domain

    - by minamhere
    We just completed a migration from Server 2003 to Server 2008 R2. Everything seems to have gone well, except that many of our desktops have stopped mapping the home folder as set in Active Directory. Other mappings that are defined on individual clients map just fine, and those mappings are on the same file server as the failing home folders. Half of the users are on one file server and half are on another, and users from both servers are having this problem. I have enabled the Group Policy setting to "Wait for network before logging in" and the policy to "Run logon scripts synchronously". There are no errors on the domain controller or on either file server. When I enabled Group Policy Preferences as an attempted workaround, I get this error:

      The user 'V:' preference item in the '<Policy Name>' Group Policy object did not apply because it failed with error code '0x800708ca This network connection does not exist.' This error was suppressed.

    This seems to indicate that the network connection is not ready by the time Group Policy is processed. But isn't this the point of the "wait before logging in" and "run logon scripts synchronously" settings? Some other background facts: the new Server 2008 R2 installation is a virtual machine; it is on a new subnet in a different building from the old server; DNS and DHCP were also migrated from the old DC to this new DC; and these home folders were all working properly before the migration. Are there new security restrictions/policies in Server 2008 R2 that might be causing this? Is there a way to check whether I have an underlying network connectivity issue? Maybe moving the server to the new building is causing a delay/timeout? Any thoughts or ideas on what could be causing this or how I can resolve it? Thanks.


  • Exchange ActiveSync Exception

    - by Dmeglio
    One of the users on my network is having an issue with his iPhone syncing via ActiveSync. Overall it's working, but every now and then he gets a "Synchronization with your iPhone failed for 3 items." I asked him to go into OWA and turn on the Mobile Phone logging. I looked through the logs and this is what stood out to me:

      SyncCommand_GenerateResponsesXmlNode_AddChange_Exception :
      Microsoft.Exchange.Data.Storage.PropertyErrorException: Property: [{00062008-0000-0000-c000-000000000046}:0x8501] ReminderMinutesBeforeStartInternal, PropertyErrorCode: NotFound, PropertyErrorDescription: .
        at Microsoft.Exchange.Data.Storage.PropertyBag.ThrowIfPropertyError(StorePropertyDefinition propertyDefinition, Object propertyValue)
        at Microsoft.Exchange.Data.Storage.StoreObject.GetProperty(PropertyDefinition propertyDefinition)
        at Microsoft.Exchange.Data.Storage.MeetingMessage.get_Item(PropertyDefinition propertyDefinition)
        at Microsoft.Exchange.AirSync.SchemaConverter.XSO.XsoMeetingRequestProperty.get_NestedData()
        at Microsoft.Exchange.AirSync.SchemaConverter.AirSync.AirSyncMeetingRequestProperty.InternalCopyFrom(IProperty srcProperty)
        at Microsoft.Exchange.AirSync.SchemaConverter.AirSync.AirSyncProperty.CopyFrom(IProperty srcProperty)
        at Microsoft.Exchange.AirSync.SchemaConverter.AirSync.AirSyncDataObject.CopyFrom(IProperty srcRootProperty)
        at Microsoft.Exchange.AirSync.SyncCollection.ConvertServerToClientObject(ISyncItem syncItem, XmlNode airSyncParentNode, SyncOperation changeObject)
        at Microsoft.Exchange.AirSync.SyncCollection.GenerateCommandsXmlNode(XmlDocument xmlResponse, IAirSyncVersionFactory versionFactory, String deviceType, ProtocolLogger protocolLogger, MailboxLogger mailboxLogger)

    Does anyone have any idea what might cause this? We have 4 iPhone users connected to our Exchange via ActiveSync. Right now, this seems to be the only user experiencing this issue. I'd appreciate any help anyone can provide. Thanks.


  • Squid, authentication, Outlook Anywhere, Windows 7 and HTTP 1.1 = NIGHTMARE

    - by Massimo
    I'm running a Squid proxy (latest version, 3.1.4) on Linux CentOS 5.4 with Samba 3.5.4, in order to allow authenticated web access for domain users; everything works fine, and even Windows 7 clients are fully supported. Authentication is transparent for domain users, while it is explicitly requested for non-domain ones, and it works if the user can provide valid domain credentials. All nice and good. Then, Outlook Anywhere kicks in and pain and suffering ensue. When Outlook (be it 2007 or 2010, it doesn't matter) runs on Windows XP clients, it connects gracefully through the Squid proxy to its remote Exchange server. When it runs on Windows 7, it doesn't. If the authentication requirement is lifted from the proxy, everything works on Windows 7 too, so the problem is obviously related to NTLM authentication with Squid. Digging more deeply (WireShark), I discovered Outlook Anywhere uses HTTP 1.1 when it runs on Windows 7, while it uses HTTP 1.0 when on Windows XP. And it looks like Squid, even in its latest incarnation, still has some serious troubles handling HTTP 1.1 properly, particularly when SSL and proxy authentication are thrown in the mix. While waiting for Squid to fully and officially support HTTP 1.1 (and it looks like this could take quite a long time), I'm looking for one of the following solutions:
      1. Make Squid handle this correctly, if it is at all possible.
      2. Identify Outlook Anywhere connections and have Squid not require authentication for them. But it isn't easy: again, the behaviour of Outlook differs when running on Windows XP and Windows 7, and while on Windows XP Outlook sends a really nice user-agent string of "MSRPC", on Windows 7 it doesn't send any (why? WHY?!?).
      3. Force Outlook Anywhere to use HTTP 1.0 even when running on Windows 7. And no, this is not as simple as deselecting "use HTTP 1.1" in Internet Explorer; it looks like Outlook ignores that setting and chooses on its own which protocol to use.
    Any other feasible solution which doesn't involve whitelisting specific destination Exchange servers, which is the last-resort solution I'm trying to avoid.
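    One way to attack option 2 without matching on user-agent or destination hosts: RPC-over-HTTP uses its own request methods, so an ACL on those methods can exempt Outlook Anywhere from proxy auth. This is a sketch only; verify the method names (RPC_IN_DATA / RPC_OUT_DATA) against your own WireShark traces and fit the rules into your existing http_access ordering before relying on it.

      # squid.conf -- place these before the http_access rule that requires authentication
      acl rpc_over_http method RPC_IN_DATA RPC_OUT_DATA
      http_access allow rpc_over_http

      # apply the change
      squid -k reconfigure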


  • Outside VPN traffic not able to ping site-to-site VPN remote site

    - by Siriss
    We have two ASA 5510s running 8.4 in a site-to-site VPN setup. All internal traffic is working smoothly.

      Site/Subnet A: 192.100.0.0  - local
      Site/Subnet B: 192.200.0.0  - remote
      VPN users:     192.100.40.0 - assigned by the ASA

    When you VPN into the network, all traffic hits Site A, and everything on subnet A is accessible. Site B, however, is completely inaccessible for VPN users. No machine on subnet B, nor the firewall itself, is reachable by ping or otherwise. I know I am missing a NAT rule, and in 8.2 it was easy as pie to set up using ASDM, but now I can't get it for the life of me, as 8.4 apparently made a lot of changes to the NAT rules. I am not too comfortable on the ASA command line, but if there is a command I need to add, or if you could direct me to where I can add this in the 8.4 ASDM, I would really appreciate it. I have tried NAT exempt, static NAT, static NAT policies, etc.; I think I tried all the options. I also might have my interfaces confused with the new look and feel of ASDM. Thank you much in advance, and I hope I have been thorough enough.
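    For what it's worth, a sketch of the 8.4-style identity (twice) NAT plus hairpinning that this scenario usually needs on the Site A ASA. The object names and /24 masks are assumptions; the Site B ASA's crypto ACL and NAT exemption must also include the 192.100.40.0 pool, and the VPN group policy's split-tunnel list must include the Site B subnet.

      object network OBJ-VPN-POOL
       subnet 192.100.40.0 255.255.255.0
      object network OBJ-SITE-B
       subnet 192.200.0.0 255.255.255.0
      ! identity NAT so VPN-pool <-> Site B traffic is not translated
      nat (outside,outside) source static OBJ-VPN-POOL OBJ-VPN-POOL destination static OBJ-SITE-B OBJ-SITE-B no-proxy-arp route-lookup
      ! allow traffic to enter and leave the same (outside) interface
      same-security-traffic permit intra-interface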


  • Need help tuning Mysql and linux server

    - by Newtonx
    We have a multi-user application (like MailChimp or Constant Contact). Each of our customers has their own contact list (from 5 to 100,000 contacts). Everything is stored in one big database (currently 25 GB). Since we released our product we have accumulated five years of data history:
      - users/customers (200+)
      - contacts (40 million records)
      - campaigns
      - campaign_deliveries (73,843,764 records)
      - campaign_queue (8 million currently)

    As we get more users and the tables grow, our system/web app is getting slower and slower. Some queries take too long to execute.

    SCHEMA

    Table contacts
      +---------------------+------------------+------+-----+---------+----------------+
      | Field               | Type             | Null | Key | Default | Extra          |
      +---------------------+------------------+------+-----+---------+----------------+
      | contact_id          | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
      | client_id           | int(10) unsigned | YES  |     | NULL    |                |
      | name                | varchar(60)      | YES  |     | NULL    |                |
      | mail                | varchar(60)      | YES  | MUL | NULL    |                |
      | verified            | int(1)           | YES  |     | 0       |                |
      | owner               | int(10) unsigned | NO   | MUL | 0       |                |
      | date_created        | date             | YES  | MUL | NULL    |                |
      | geolocation         | varchar(100)     | YES  |     | NULL    |                |
      | ip                  | varchar(20)      | YES  | MUL | NULL    |                |
      +---------------------+------------------+------+-----+---------+----------------+

    Table campaign_deliveries
      +---------------+------------------+------+-----+---------+----------------+
      | Field         | Type             | Null | Key | Default | Extra          |
      +---------------+------------------+------+-----+---------+----------------+
      | id            | int(11)          | NO   | PRI | NULL    | auto_increment |
      | newsletter_id | int(10) unsigned | NO   | MUL | 0       |                |
      | contact_id    | int(10) unsigned | NO   | MUL | 0       |                |
      | sent_date     | date             | YES  | MUL | NULL    |                |
      | sent_time     | time             | YES  | MUL | NULL    |                |
      | smtp_server   | varchar(20)      | YES  |     | NULL    |                |
      | owner         | int(5)           | YES  | MUL | NULL    |                |
      | ip            | varchar(20)      | YES  | MUL | NULL    |                |
      +---------------+------------------+------+-----+---------+----------------+

    Table campaign_queue
      +---------------+------------------+------+-----+---------+----------------+
      | Field         | Type             | Null | Key | Default | Extra          |
      +---------------+------------------+------+-----+---------+----------------+
      | queue_id      | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
      | newsletter_id | int(10) unsigned | NO   | MUL | 0       |                |
      | owner         | int(10) unsigned | NO   | MUL | 0       |                |
      | date_to_send  | date             | YES  |     | NULL    |                |
      | contact_id    | int(11)          | NO   | MUL | NULL    |                |
      | date_created  | date             | YES  |     | NULL    |                |
      +---------------+------------------+------+-----+---------+----------------+

    Slow query log
      # Query_time: 350  Lock_time: 1  Rows_sent: 1  Rows_examined: 971004
      SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 70 AND contacts.verified = 1);

      # Query_time: 235  Lock_time: 1  Rows_sent: 1  Rows_examined: 4455209
      SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 2);

    How can we optimize this? Queries should take no more than 30 seconds to execute. Can we optimize it and keep all the data in one big database, or should we change the app's structure and give each user their own database? Thanks
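    As a first, low-risk step (a sketch; run it in a maintenance window, since adding an index to a 40-million-row table locks or rebuilds it), a composite index matching the slow WHERE clauses lets MySQL resolve those COUNT(*) queries from the index instead of the table rows:

      -- both slow queries filter on owner (and sometimes verified),
      -- so one composite index covers them
      ALTER TABLE contacts ADD INDEX idx_owner_verified (owner, verified);

      -- confirm the new index is chosen
      EXPLAIN SELECT COUNT(*) AS total FROM contacts WHERE owner = 70 AND verified = 1;
      EXPLAIN SELECT COUNT(*) AS total FROM contacts WHERE owner = 2;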


  • LDAP ACLs with ldapmodify & .ldif file to grant user access only

    - by plaetzchen
    I want to change the settings of my new LDAP server so that only users of the server, and not anonymous clients, can read entries. Currently my olcAccess looks like this:

      olcAccess: {0} to attrs=userPassword,shadowLastChange by self write by anonymous auth by dn="cn=admin,dc=example,dc=com" write by * none
      olcAccess: {1} to * by self write by dn="cn=admin,dc=example,dc=com" write by * read

    I tried to change it like so:

      olcAccess: {0} to attrs=userPassword,shadowLastChange by self write by anonymous auth by dn="cn=admin,dc=example,dc=com" write by * none
      olcAccess: {1} to * by self write by dn="cn=admin,dc=exampme,dc=com" write by users read

    But that gives me no access at all. Can someone help me with this? Thanks.

    UPDATE: This is the log, read after the changes mentioned by userxxx:

      Sep 30 10:47:21 j16354 slapd[11805]: conn=1437 fd=28 ACCEPT from IP=87.149.169.6:64121 (IP=0.0.0.0:389)
      Sep 30 10:47:21 j16354 slapd[11805]: conn=1437 op=0 do_bind: invalid dn (pbrechler)
      Sep 30 10:47:21 j16354 slapd[11805]: conn=1437 op=0 RESULT tag=97 err=34 text=invalid DN
      Sep 30 10:47:21 j16354 slapd[11805]: conn=1437 op=1 UNBIND
      Sep 30 10:47:21 j16354 slapd[11805]: conn=1437 fd=28 closed
      Sep 30 10:47:21 j16354 slapd[11805]: conn=1438 fd=28 ACCEPT from IP=87.149.169.6:64122 (IP=0.0.0.0:389)
      Sep 30 10:47:21 j16354 slapd[11805]: conn=1438 op=0 do_bind: invalid dn (pbrechler)
      Sep 30 10:47:21 j16354 slapd[11805]: conn=1438 op=0 RESULT tag=97 err=34 text=invalid DN
      Sep 30 10:47:21 j16354 slapd[11805]: conn=1438 op=1 UNBIND
      Sep 30 10:47:21 j16354 slapd[11805]: conn=1438 fd=28 closed

    pbrechler should be a valid user but has no system user (we don't need one). admin doesn't work either.
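    The log lines are telling: err=34 "invalid DN" means the client tried to bind as the bare name "pbrechler", which slapd rejects, so the connection falls back to anonymous and the new "by users read" rule never matches. A quick test with a full bind DN (a sketch; the ou=people part is an assumption about your tree) shows whether the ACL itself is fine:

      # bind as a real user (full DN) and search; this should succeed under "by users read"
      ldapsearch -x -H ldap://localhost -D "uid=pbrechler,ou=people,dc=example,dc=com" -W -b "dc=example,dc=com" "(uid=pbrechler)"

      # the same search bound anonymously should now be refused
      ldapsearch -x -H ldap://localhost -b "dc=example,dc=com" "(uid=pbrechler)"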


  • Port forwarding for samba

    - by EternallyGreen
    Alright, here's the setup: Internet - modem - WRT54G - hubs - WinXP workstations and a Linux SMB server. It's basically a home-style distributed internet connection setup, except it's at a school. What I want is remote, offsite SMB access. I figured I'd need to find out which ports need forwarding and then forward them to the server on the router. I'm told in another question on SF that multiple ports will need forwarding, and that it gets somewhat complicated. One of the things I need to know is which ports require forwarding for this, and what complications or vulnerabilities could arise from it. Any additional information you think I should have before doing this would be great.
    I'm told SMB doesn't support encryption, which is fine. Given that I set up authentication/access control, all this means is that once one of my users authenticates and starts downloading data, the unencrypted traffic could be intercepted and read by a MITM, correct? Given that that's the only problem arising from the lack of encryption, it is of no concern to me. I suppose it could also mean a MITM injecting false data into the data stream, e.g. a user requests file A, and the MITM intercepts and replaces the contents of file A with some false data. This isn't really an issue either, because my users would know that something was wrong, and it's not likely anyone would have an incentive to do this anyway. Another thing I've been informed of is Microsoft's poor implementation of SMB and its poor security track record. Does this apply if only the client end is MS? My server is Linux.
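    For reference, the ports SMB uses, plus a sketch of the tunnelled alternative many people prefer to exposing them directly (the host name and the student account are placeholders):

      # SMB/CIFS ports: 445/tcp (SMB over TCP), 139/tcp (NetBIOS session),
      # 137/udp and 138/udp (NetBIOS name/datagram services)

      # Instead of forwarding those on the WRT54G, forward only 22/tcp to the Linux
      # box and tunnel SMB through SSH from the remote client:
      ssh -L 1445:localhost:445 user@school-public-address
      smbclient //localhost/share -p 1445 -U student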


  • Managing Apache to Compensate for WebDAV's Security Masking

    - by Tohuw
    When a user creates a file via WebDAV, the default behavior is that the file is owned by the user and group running the Apache process, with a umask of 022. Unfortunately, this makes it impossible for unprivileged users to write to the files by other means without being a member of the group Apache runs under (which strikes me as a particularly bad idea). My current solution is to set umask 000 in Apache's envvars and remove all world permissions from the webdav parent directory for the user. So, if the WebDAV share is /home/foo/www, then /home/foo/www is owned by www-data:foo with permissions of 770. This keeps other unprivileged users out, more or less, but it's hokey at best and a security disaster awaiting at worst. From my research and poking around at mod_dav and Apache, I cannot find a reasonable solution short of a cron job flipping all the permissions back (I'd rather not have the load and increased complexity on the server). SuExec won't work, either, because WebDAV operations are not going to execute as a different user. Any thoughts on this? Thank you.
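    A middle ground that avoids umask 000 entirely, sketched below against the /home/foo/www example above (the group name is made up; the envvars path and www-data user assume a Debian-style Apache layout): give the share a dedicated group, make the directory setgid so new files inherit that group, and run Apache with umask 002 so the group, but not the world, can write.

      groupadd webdav-foo
      usermod -aG webdav-foo foo                 # the unprivileged user who needs access by other means
      usermod -aG webdav-foo www-data            # the Apache user, so it can modify files foo creates
      chgrp -R webdav-foo /home/foo/www
      chmod 2770 /home/foo/www                   # setgid: new files inherit the webdav-foo group
      echo 'umask 002' >> /etc/apache2/envvars   # files created via WebDAV become group-writable
      service apache2 restart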


  • Setting Up VirtualHosts for a local RubyOnRails Application

    - by chris Frisina
    I want to set up a VirtualHost so that when I type the project name in the address bar, it goes to the home page of the project. The httpd-vhosts.conf files in both the XAMPP and Apache configurations contain:

      <VirtualHost project>
          ServerAdmin [email protected]
          ServerName project
          DocumentRoot /Users/path/to/project/public
          <Directory "/Users/path/to/project/public">
              Options Includes FollowSymLinks
              AllowOverride All
              Order deny,allow
              Allow from all
          </Directory>
          RewriteEngine On
          RewriteOptions inherit
      </VirtualHost>

    I have also tried pointing the path directly at the project folder rather than the project's public folder. The httpd.conf of XAMPP and Apache has:

      # Virtual hosts
      Include /Applications/XAMPP/etc/extra/httpd-vhosts.conf

    The /etc/hosts file:

      127.0.0.1 other1
      127.0.0.1 other2
      127.0.0.1 project

    I have tried:

      127.0.0.1 project
      127.0.0.1:3000 project
      127.0.0.1 project:3000

    I have also restarted the Rails server and the XAMPP server many times after changes. I have it working so that project:3000/ works, but how do I get it so that I don't have to specify the port number? Notes: all other vhosts are working well. Rails 3.2.8 (willing to change), Ruby 1.9.3, WEBrick server.
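    Entries in /etc/hosts cannot carry a port, so one sketch of making plain http://project/ work is to keep the hosts entry as-is and have Apache on port 80 reverse-proxy that name to WEBrick on 3000. This assumes mod_proxy and mod_proxy_http are enabled, and the reload command is a guess at the XAMPP layout:

      # added to /Applications/XAMPP/etc/extra/httpd-vhosts.conf
      <VirtualHost *:80>
          ServerName project
          ProxyPreserveHost On
          ProxyPass / http://127.0.0.1:3000/
          ProxyPassReverse / http://127.0.0.1:3000/
      </VirtualHost>

      # then reload Apache (or restart it from the XAMPP control panel)
      sudo /Applications/XAMPP/xamppfiles/xampp reloadapache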


  • Windows XP mapped drives disconnect from Windows 7 on peer-to-peer network

    - by Kathryn Codo
    I have a five-user peer-to-peer network: two XP machines, three Windows 7 machines, and a Windows 7 machine acting as the file server. I have no issues with the Windows 7 machines staying connected to the "server" via their mapped drives. However, once a day the two XP machines cannot reach the Windows 7 "server" through the mapped drives, and I must restart the "server" in order to see it and access the files via the mapped drives or Explorer.
    I have tried the /persistent:yes option with NET USE on the XP machines and also set a static IP address on the "server". I turned the firewall off on the "server"; it made no difference, so I turned the firewall back on. There is no interruption to the internet connection when this happens; the XP machines just can't see the server. Again, the Windows 7 users are unaffected. I have gone into the advanced network settings on the "server" and added read/write permissions for Everyone as well as for the users of the XP machines. I have double-checked that we are all in the same WORKGROUP. I'm at a loss. It seems to happen around mid-day, and I've not been able to find any activity that happens at that time (like someone plugging in a thumb drive). Any suggestions would be greatly appreciated, as my client is running old XP software he no longer has the disks for and needs it for his engineering firm.
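    One thing worth ruling out (a sketch, run in an elevated command prompt): the Server service drops idle sessions after 15 minutes by default, and XP clients sometimes fail to re-establish them cleanly. Disabling autodisconnect on the Windows 7 "server" and re-mapping the drives persistently on the XP machines tests that theory (the drive letter and share name below are examples):

      REM on the Windows 7 machine acting as the server: never drop idle SMB sessions
      net config server /autodisconnect:-1

      REM on each XP client: re-create the mapping as persistent
      net use V: \\server\share /persistent:yes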


  • snort analysis of wireshark capture

    - by Ben Voigt
    I'm trying to identify trouble users on our network. ntop identifies high-traffic and high-connection users, but malware doesn't always need high bandwidth to really mess things up. So I am trying to do offline analysis with snort (I don't want to burden the router with inline analysis of 20 Mbps traffic). Apparently snort provides a -r option for this purpose, but I can't get the analysis to run. The analysis system is Gentoo, amd64, in case that makes any difference. I've already used oinkmaster to download the latest IDS signatures. But when I try to run snort, I keep getting the following error:

      % snort -V

         ,,_     -*> Snort! <*-
        o"  )~   Version 2.9.0.3 IPv6 GRE (Build 98) x86_64-linux
         ''''    By Martin Roesch & The Snort Team: http://www.snort.org/snort/snort-team
                 Copyright (C) 1998-2010 Sourcefire, Inc., et al.
                 Using libpcap version 1.1.1
                 Using PCRE version: 8.11 2010-12-10
                 Using ZLIB version: 1.2.5

      %> snort -v -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/
      (snip)
      273 out of 1024 flowbits in use.
      [ Port Based Pattern Matching Memory ]
      +- [ Aho-Corasick Summary ] -------------------------------------
      | Storage Format    : Full-Q
      | Finite Automaton  : DFA
      | Alphabet Size     : 256 Chars
      | Sizeof State      : Variable (1,2,4 bytes)
      | Instances         : 314
      |     1 byte states : 304
      |     2 byte states : 10
      |     4 byte states : 0
      | Characters        : 69371
      | States            : 58631
      | Transitions       : 3471623
      | State Density     : 23.1%
      | Patterns          : 3020
      | Match States      : 2934
      | Memory (MB)       : 29.66
      |   Patterns        : 0.36
      |   Match Lists     : 0.77
      |   DFA
      |     1 byte states : 1.37
      |     2 byte states : 26.59
      |     4 byte states : 0.00
      +----------------------------------------------------------------
      [ Number of patterns truncated to 20 bytes: 563 ]
      ERROR: Can't find pcap DAQ!
      Fatal Error, Quitting..

    net-libs/daq is installed, but I don't even want to capture traffic, I just want to process the capture file. What configuration options should I be setting/unsetting in order to do offline analysis instead of real-time capture?
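    In case it helps someone hitting the same error, a sketch of forcing the pcap DAQ explicitly. The module directory below is a guess for 64-bit Gentoo; check where net-libs/daq actually installed its modules, and rebuild the package if no pcap DAQ module is present there.

      # see which DAQ modules this snort build can find
      ls /usr/lib64/daq/ 2>/dev/null || ls /usr/lib/daq/

      # point snort at the module directory and select the pcap DAQ in read-file mode
      snort --daq-dir /usr/lib64/daq --daq pcap --daq-mode read-file \
            -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/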


  • Openfire on EC2 with Jingle

    - by Bjorn Roche
    I would like to run Openfire (or another XMPP server) on EC2. At the moment this is just for testing, so easy setup and configuration are important, as is low cost. At some point, however, if things go well, it will be important to scale this. Ideally it would be nice not to have to switch software when the scaling happens, but if a switch needs to happen later, it certainly can. My requirements are:
      - Basic XMPP services, including MUC and pubsub.
      - Logins controlled from an external API. Preferably, when a user attempts to connect, the XMPP server checks with the API to see if their username and password are correct, but I can also have the API keep the XMPP server up to date on new users, deleted users, password changes and so on. I see Openfire has a "user service" API. Not ideal, but it looks workable.
      - Jingle, including relay and STUN. It's not at all clear to me whether the Jingle Nodes plugin takes care of this. I'm a bit confused about what's required to set it up, and I'd rather know in advance than be confused along the way :). E.g. it seems like STUN servers require more than one IP address.
    Can Openfire do all this for me, including STUN and media relay on a single machine? Is this hard to configure on EC2 with Openfire? What are the basic steps? Would this be easier with something else like, say, Tigase? What about the database? Should I use Amazon's database service, or run a DB on the same machine? Would the server be compatible with a service like http://www.siteuptime.com/ Thanks!


  • Domain authentication over OPEN wireless pre-logon (Windows 7 Pro) - No logon servers avail

    - by Shadow00Caster
    I have a plethora of laptops that are joined to an AD domain, and an enterprise wireless system set up. The users of these laptops will connect to an OPEN, unsecured SSID, which will ultimately have a captive portal using RADIUS-to-AD authentication, with firewall rules that allow access before captive-portal auth only to the proper IPs/ports of the DCs, etc., so that domain authentication still works. I already have other laptops/users connecting to another SSID with 802.1X and SSO, and everything works perfectly pre-logon there.
    My problem is with this open network: for some reason I cannot get the machines to authenticate to AD. The laptops connect to the wireless network; I confirm this on the controller and can ping the laptop at startup. I captured traffic on the wires of the two DCs these machines authenticate against, and I can see a DNS SOA update from the test laptop and can ping that laptop from both DCs. When I try to log on: "There are currently no logon servers available to service the logon request." The capture shows no incoming connections to either DC, even though the laptop is connected and pingable. Any help is greatly appreciated.


  • File sharing for small, distributed, non-technical, non-profit organization?

    - by mnmldave
    Problem: I've started volunteering for a small non-profit with fewer than five non-technical Windows users who need to share 20-30GB of files (Office documents, images, PDFs, etc.) amongst themselves online. Background: The users are accustomed to a Windows network share on a machine that backed up their data locally. An on-site "disaster" has forced them to work from their homes for awhile and to re-evaluate their file sharing needs (office was located in an old building with obvious electrical issues, etc.). Access to time from volunteers with IT experience seems to be difficult. Demonstrably minimizing energy consumption is a nice-to-have. I'm currently considering Jungle Disk (a Desktop account shared amongst the handful of employees since their TOS and my inquiries to their helpdesk seem to indicate this is permissible). It appears easy-to-use, inexpensive, secure, has backup functionality, and can scale to accomodate more data when needed. I've not used it myself though (have only used Dropbox for personal use) and systems isn't my area of expertise, so am worried I might be jumping on a bandwagon. That said, any suggestions, thoughts or similar experiences would be really appreciated.


  • XAMPP CURL not working!

    - by MiffTheFox
    Nope, it's not. Windows Vista Home Premium x32. Relevant section of php.ini:

      ; Windows Extensions
      ; Note that ODBC support is built in, so no dll is needed for it.
      ; Note that many DLL files are located in the extensions/ (PHP 4) ext/ (PHP 5)
      ; extension folders as well as the separate PECL DLL download (PHP 5).
      ; Be sure to appropriately set the extension_dir directive.
      ;extension=php_apc.dll
      ;extension=php_apd.dll
      ;extension=php_bcompiler.dll
      ;extension=php_bitset.dll
      ;extension=php_blenc.dll
      ;extension=php_bz2.dll
      ;extension=php_bz2_filter.dll
      ;extension=php_classkit.dll
      ;extension=php_cpdf.dll
      ;extension=php_crack.dll
      extension=php_curl.dll
      ;extension=php_cvsclient.dll
      ;extension=php_db.dll
      ;extension=php_dba.dll
      ;extension=php_dbase.dll
      ;extension=php_dbx.dll

    Proof it's not working:

      c:\users\miff>curl http://localhost/xampp/phpinfo.php | grep curl
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      100 62592    0 62592    0     0  1971k      0 --:--:-- --:--:-- --:--:-- 3319k
      <tr><td class="e">HTTP_USER_AGENT </td><td class="v">curl/7.16.3 (i686-pc-cygwin) libcurl/7.16.3 OpenSSL/0.9.8k zlib/1.2.3 libssh2/0.15-CVS </td></tr>
      <tr><td class="e">User-Agent </td><td class="v">curl/7.16.3 (i686-pc-cygwin) libcurl/7.16.3 OpenSSL/0.9.8k zlib/1.2.3 libssh2/0.15-CVS </td></tr>
      <tr><td class="e">_SERVER["HTTP_USER_AGENT"]</td><td class="v">curl/7.16.3 (i686-pc-cygwin) libcurl/7.16.3 OpenSSL/0.9.8k zlib/1.2.3 libssh2/0.15-CVS</td></tr>

      c:\users\miff>
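    A note on the test itself: grepping the rendered phpinfo page for "curl" mostly matches the User-Agent of the curl binary doing the request, so it says nothing about whether the extension loaded. Two more direct checks (the php.exe path assumes a default XAMPP-for-Windows install; also remember the CLI and Apache can read different php.ini files, so check "Loaded Configuration File" in the phpinfo page as well):

      REM ask the CLI PHP which modules it loads
      C:\xampp\php\php.exe -m | findstr /i curl

      REM or look for the "cURL support => enabled" line in the full config dump
      C:\xampp\php\php.exe -i | findstr /i /c:"curl support"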


  • (solved) `ssh foo "<command/>"` not loading remote aliases?

    - by TomRoche
    Summary: why does this fail

      $ ssh foo 'R --version | head -n 1'
      bash: R: command not found

    but this succeeds?

      $ ssh foo 'grep -nHe 'bashrc' ~/.bash_profile'
      /home/me/.bash_profile:3:# source the users .bashrc if it exists
      /home/me/.bash_profile:4:if [ -f "${HOME}/.bashrc" ] ; then
      /home/me/.bash_profile:5:  source "${HOME}/.bashrc"
      $ ssh foo 'grep -nHe "\WR\W" ~/.bashrc'
      /home/me/.bashrc:118:alias R='/share/linux86_64/bin/R'
      $ ssh foo '/share/linux86_64/bin/R --version | head -n 1'
      R version 2.14.1 (2011-12-22)

    Details: I am a (rootless) user on two clusters. One uses environment modules, so any given server on that cluster can provide (via module add) pretty much the same resources. The other cluster, on which I must also unfortunately work, has servers managed individually, so I get in the habit of doing, e.g.,

      EXEC_NAME='whatever'
      for S in 'foo' 'bar' 'baz' ; do
        ssh ${SERVER} "${EXEC_NAME} --version"
      done

    This works fine for packages installed normally/consistently, but often (for reasons unknown to me) packages are not, e.g. (compare the alias below to the alias above):

      $ ssh bar 'R --version | head -n 1'
      bash: R: command not found
      $ ssh bar 'grep -nHe 'bashrc' ~/.bash_profile'
      /home/me/.bash_profile:3:# source the users .bashrc if it exists
      /home/me/.bash_profile:4:if [ -f "${HOME}/.bashrc" ] ; then
      /home/me/.bash_profile:5:  source "${HOME}/.bashrc"
      $ ssh bar 'grep -nHe "\WR\W" ~/.bashrc'
      /home/me/.bashrc:118:alias R='/share/linux/bin/R'
      $ ssh bar '/share/linux86_64/bin/R --version | head -n 1'
      R version 2.14.1 (2011-12-22)

    Using aliases copes well with these install differences when I interactively shell into the server, but it fails when I try to script ssh commands (as above); i.e., interactively

      $ ssh foo
      ...
      foo> R --version

    calls my alias for R on remote host foo, but scripting

      $ ssh foo 'R --version'

    doesn't. What do I need to do to make ssh foo "<command/>" load my aliases on the remote host?
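    For the record (the question is marked solved), the usual explanation and workarounds: non-interactive shells do not expand aliases, and many .bashrc files return early when the shell is not interactive. Two sketches that sidestep the problem:

      # force an interactive shell on the remote side so ~/.bashrc and its aliases load
      ssh foo 'bash -ic "R --version | head -n 1"'

      # or expand aliases explicitly in the non-interactive shell
      # (only helps if your .bashrc does not bail out for non-interactive shells)
      ssh foo 'shopt -s expand_aliases; source ~/.bashrc; R --version | head -n 1'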


  • Why is squid breaking kerberos/NTLM auth?

    - by DonEstefan
    I'm using Squid 2.6.22 (the CentOS 5 default) as a proxy. Squid seems to break the authentication process for web pages that require NTLM or Kerberos auth. I tested against SharePoint 2007 and tried all three authentication methods (NTLM, Kerberos, Basic). Accessing the site without Squid works in all cases. When I access the same page through Squid, only basic auth works. Using IE or Firefox doesn't make any difference. Squid itself can be used by anybody (no auth_param configured). It's a bit tricky to find solutions online, since most of the topics revolve around auth_param for authenticating users to Squid, rather than authenticating users to a web page behind Squid. Could anyone help?
    Edit: Sorry, but my first test was totally screwed up. I tested against the wrong web servers (memo to myself: always check assumptions before testing). Now I realize that the problem scenario is completely different:
      - Kerberos works for IE
      - Kerberos works for Firefox (after changing "network.negotiate-auth.trusted-uris" in about:config)
      - NTLM works for IE
      - NTLM does NOT work in Firefox (even after changing "network.automatic-ntlm-auth.trusted-uris" in about:config)
    By the way: the feature that provides NTLM pass-through in Squid is called "connection pinning", and the relevant HTTP header is "Proxy-support: Session-based-authentication".


  • Mac Management Without Permission and Security

    - by Bart Silverstrim
    I was going through some literature on managing OS X laptops and asked someone questions about usage scenarios for the MacBooks. I asked someone more knowledgeable than I whether my Mac could be taken over while visiting another site for a conference, or while on the wifi at a local coffee house, by policies pushed from an OS X Server running Workgroup Manager (either one legitimately run by the site, or a copy of OS X Server someone has hidden somewhere on the network), which apparently can be set up to do things like limit my access to the Finder or impose other neat whiz-bang management features. He said it is indeed possible: the settings would be handed out via the DHCP server, the OS X Server would treat my Mac as a guest and could hand out restrictions, and apparently my Mac would happily accept them without notifying me or giving me an option, unlike Windows, which I believe would need to be joined to a domain before it becomes "managed" by Active Directory.
    So my question, for network admins and sysadmins with users travelling with MacBooks: is there a way to reasonably protect your users from having their machines hijacked, without resorting to just turning off networking all the time? Or isn't this much of a security hazard? What threat does this pose to the road warriors in your businesses?

