Search Results

Search found 5295 results on 212 pages for 'transaction scope'.


  • Postfix: LDAP not working (warning: dict_ldap_lookup: Search base not found: 32: No such object)

    - by Heinzi
    I set up LDAP access with Postfix. This ldapsearch delivers the correct entry:

        ldapsearch -D "cn=postfix,ou=users,ou=system,[domain]" -w postfix -b "ou=users,ou=people,[domain]" -s sub "(&(objectclass=inetOrgPerson)(mail=[mailaddr]))"

    The LDAP config file looks like this:

        root@server2:/etc/postfix/ldap# cat mailbox_maps.cf
        server_host = localhost
        search_base = ou=users,ou=people,[domain]
        scope = sub
        bind = yes
        bind_dn = cn=postfix,ou=users,ou=system,[domain]
        bind_pw = postfix
        query_filter = (&(objectclass=inetOrgPerson)(mail=%s))
        result_attribute = uid
        debug_level = 2

    The bind_dn and bind_pw are the same as the ones I used above with ldapsearch. Nevertheless, calling postmap doesn't work:

        root@server2:/etc/postfix/ldap# postmap -q [mailaddr] ldap:/etc/postfix/ldap/mailbox_maps.cf
        postmap: warning: dict_ldap_lookup: /etc/postfix/ldap/mailbox_maps.cf: Search base 'ou=users,ou=people,[domain]' not found: 32: No such object

    If I change the LDAP configuration so that anonymous users have complete access to LDAP:

        olcAccess: {-1}to * by * read

    then it works:

        root@server2:/etc/postfix/ldap# postmap -q [mailaddr] ldap:/etc/postfix/ldap/mailbox_maps.cf
        [user-id]

    But when I restrict this access to the postfix user:

        olcAccess: {-1}to * by dn="cn=postfix,ou=users,ou=system,[domain]" read by * break

    it doesn't work and produces the error printed above (ldapsearch works; only postmap doesn't). Why doesn't it work when binding with the postfix DN? I think I set up the LDAP ACL for the postfix user correctly, as the ldapsearch command should prove. What can be the reason for this behaviour?
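
    One hedged experiment worth noting: instead of widening access for everyone, grant the postfix bind DN read access to the search-base subtree with an explicit, subtree-scoped ACL. This is a minimal sketch only: it assumes the cn=config backend and that the data database is olcDatabase={1}hdb, and it reuses the [domain] placeholder from the question; verify both assumptions before applying.

        # Sketch: add a subtree-scoped read ACL for the postfix bind DN
        ldapmodify -Y EXTERNAL -H ldapi:/// <<'EOF'
        dn: olcDatabase={1}hdb,cn=config
        changetype: modify
        add: olcAccess
        olcAccess: {0}to dn.subtree="ou=users,ou=people,[domain]" by dn.exact="cn=postfix,ou=users,ou=system,[domain]" read by * break
        EOF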


  • Uninstall Google Chrome in Fedora

    - by tbleckert
    Yesterday I installed Fedora 15 Beta with GNOME 3 - it works well. One problem though: I installed 32-bit Chrome (which was wrong, it should have been the 64-bit version) and now I can't uninstall it. I can't find it in Add/Remove Software, and I also can't install the correct version of Chrome because it complains about my other copy of Chrome. Any ideas how I can remove the existing copy and get the 64-bit version installed? Here's the message I get when trying to install:

        Test Transaction Errors:
        file /etc/cron.daily/google-chrome from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386
        file /opt/google/chrome/chrome from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386
        file /opt/google/chrome/chrome-sandbox from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386
        file /opt/google/chrome/libffmpegsumo.so from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386
        file /opt/google/chrome/libpdf.so from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386
        file /opt/google/chrome/libppGoogleNaClPluginChrome.so from install of google-chrome-stable-11.0.696.65-84435.x8...
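
    If the GUI can't see the package, rpm usually can. A hedged sketch, using the exact package name from the error output above (the 64-bit rpm file name is an assumption and may differ):

        # confirm what is installed, then remove the 32-bit build by full name
        rpm -q google-chrome-stable
        sudo rpm -e google-chrome-stable-11.0.696.65-84435.i386
        # then install the 64-bit rpm downloaded from Google
        sudo rpm -ivh google-chrome-stable_current_x86_64.rpm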


  • Unexplained CPU and Disk activity spikes in SQL Server 2005

    - by Philip Goh
    Before I pose my question, please allow me to describe the situation. I have a database server with a number of tables. Two of the biggest tables contain over 800k rows each. The majority of rows are less than 10k in size, though roughly 1 in 100 rows is between 1 MB and 4 MB. So out of the 1.6 million rows, about 16,000 of them are these large rows. The reason they are this big is that we're storing zip files as binary blobs in the database, but I'm digressing.

    We have a service that runs constantly in the background, trimming 10 rows from each of these 2 tables. In the performance monitor graph (not reproduced here), these are the little bumps (red for CPU, green for disk queue). Once every minute we get a large spike of CPU activity together with a jump in disk activity (indicated by a red arrow in the original screenshot). I've run the SQL Server Profiler, and nothing jumps out as a candidate that would explain this spike. My suspicion is that this spike occurs when one of the large rows gets deleted. I've fed the results of the profiler into the tuning wizard, and I get no optimisation recommendations (i.e. I assume this means my database is indexed correctly for my current workload). I'm not overly worried, as the server is coping fine in all circumstances, even under peak load. However, I would like to know if there is anything else I can do to find out what is causing this spike.

    Update: After investigating this some more, the CPU and disk usage spike was down to SQL Server's automatic checkpoint. The database uses the simple recovery model, and this truncates the log file at each checkpoint. As described on MSDN, checkpoints occur when the transaction log becomes 70% full and the simple recovery model is in use. This has been enlightening and I've definitely learned something!
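
    Since the answer turned out to be checkpoint-driven log truncation, a quick illustrative way to watch log usage climb toward the ~70% threshold between spikes (a trusted connection to the local default instance is assumed):

        sqlcmd -S . -E -Q "DBCC SQLPERF(LOGSPACE)"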


  • Bind9 configured to start at boot, has to be started manually

    - by antik
    I've configured bind9 on my system and it works great when it runs. It's currently configured to be run at runlevel 2 by setting:

        $ sudo update-rc.d bind9 enable 2

    This appears to have done its work:

        $ tree -f /etc/rc?.d | grep -e ".*bind9$"
        |-- /etc/rc0.d/K85bind9 -> ../init.d/bind9
        |-- /etc/rc2.d/S15bind9 -> ../init.d/bind9
        |-- /etc/rc3.d/S15bind9 -> ../init.d/bind9
        |-- /etc/rc4.d/S15bind9 -> ../init.d/bind9
        |-- /etc/rc5.d/S15bind9 -> ../init.d/bind9
        |-- /etc/rc6.d/K85bind9 -> ../init.d/bind9

    Booting the system, I believe I am at runlevel 2:

        $ runlevel
        N 2

    Given the above configuration, when the system is rebooted, bind does not come up. Only on occasion, for some reason, can I resolve hostnames immediately after startup. Far more often than not, however, I cannot. I can interrogate the service's status:

        $ sudo /etc/init.d/bind9 status
        * could not access PID file for bind9

    When the service doesn't start, I can start it successfully via a terminal by issuing:

        $ sudo /etc/init.d/bind9 start

    And it works great from then on. Loopback configuration:

        $ ifconfig lo
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:1872 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1872 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:220205 (220.2 KB)  TX bytes:220205 (220.2 KB)

    Do I have my startup misconfigured? (I'm used to Gentoo, so Ubuntu's model is still a little new to me.) I'm not seeing any log indication of a failed attempt to start at boot in syslog. Is there someplace else I should be looking? What else should I look into to get bind working at startup?
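
    Beyond syslog, a couple of illustrative places to look for a boot-time start attempt (log file names are the usual Ubuntu defaults and may vary by release):

        # did named start, log anything, then die during boot?
        grep -i named /var/log/daemon.log | head -n 20
        # some releases also capture init-script output here, if the file exists
        grep -i bind /var/log/boot.log 2>/dev/null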


  • PostgreSQL update problems with uuid

    - by kdl
    Hi, I am trying to run yum update on my CentOS 5.2 box and keep getting this message:

        Missing Dependency: libossp-uuid.so.15 is needed by package postgresql-contrib

    I ran yum update postgresql separately and now it's 8.3.8. I also downloaded uuid-1.6.2 and built it from source, but I still get the same result. yum update -d6 uuid gives me this at the end:

        --> Running transaction check
        ---> Package uuid.i386 0:1.6.1-3.el5.kb set to be updated
        Checking deps for uuid.i386 0-1.6.1-3.el5.kb - u
        Checking deps for uuid.i386 0-1.5.1-4.rhel5 - None
        postgresql-contrib requires: libossp-uuid.so.15
        --> Processing Dependency: libossp-uuid.so.15 for package: postgresql-contrib
        Needed Require is not a package name. Looking up: libossp-uuid.so.15
        Potential Provider: uuid.i386 0:1.5.1-4.rhel5
        Mode is u for provider of libossp-uuid.so.15: uuid.i386 0:1.5.1-4.rhel5
        Mode for pkg providing libossp-uuid.so.15: u
        Cannot find an update path for dep for: libossp-uuid.so.15
        Searching pkgSack for dep: libossp-uuid.so.15
        Potential match for libossp-uuid.so.15 from uuid - 1.5.1-4.rhel5.i386
        Matched uuid - 1.5.1-4.rhel5.i386 to require for libossp-uuid.so.15
        uuid - 1.5.1-4.rhel5.i386 is in providing packages but it is already installed, removing.
        --> Finished Dependency Resolution
        Dependency Process ending
        Error: Missing Dependency: libossp-uuid.so.15 is needed by package postgresql-contrib

    How can I resolve this situation? Thanks
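
    The debug output says the installed uuid 1.6.1 no longer provides libossp-uuid.so.15, while postgresql-contrib still wants it. An illustrative first step is to ask the repositories which packages actually ship that soname (repoquery comes from yum-utils):

        sudo yum install -y yum-utils
        repoquery --whatprovides 'libossp-uuid.so.15'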


  • Windows 8.1 upgrade fails with error code 0xC1900101 - 0x20017

    - by cmorse
    I just tried to install Windows 8.1 on my laptop, but it fails to install with the message:

        Sorry we couldn't complete the update to Windows 8.1. We restored your previous version of Windows to this PC
        0xC1900101 - 0x20017

    It looks like on the first boot the laptop is going to the PC restore screen (it asks what kind of keyboard I have, and then what repair options I would like to take). Thus far I have just been selecting "Continue to Windows 8."

    I'm running a Lenovo x220i tablet. I've got 43 GB of free disk space. It installed just fine on my desktop. The primary differences between the two machines are that the desktop has Media Center installed and isn't using TrueCrypt.

    Full WindowsUpdate.log: http://pastebin.com/hGmAW4Q1

    Most important portion of WindowsUpdate.log:

        2013-10-17 10:41:06:671 964 694 Agent *************
        2013-10-17 10:41:06:671 964 694 Agent ** START ** Agent: Finding updates [CallerId = AutomaticUpdates]
        2013-10-17 10:41:06:671 964 694 Agent *********
        2013-10-17 10:41:06:671 964 694 Agent * Online = No; Ignore download priority = No
        2013-10-17 10:41:06:671 964 694 Agent * Criteria = "IsInstalled=0 and DeploymentAction='Installation' or IsPresent=1 and DeploymentAction='Uninstallation' or IsInstalled=1 and DeploymentAction='Installation' and RebootRequired=1 or IsInstalled=0 and DeploymentAction='Uninstallation' and RebootRequired=1"
        2013-10-17 10:41:06:671 964 694 Agent * ServiceID = {7971F918-A847-4430-9279-4A52D1EFE18D} Third party service
        2013-10-17 10:41:06:671 964 694 Agent * Search Scope = {Machine & All Users}
        2013-10-17 10:41:06:671 964 694 Agent * Caller SID for Applicability: S-1-5-18
        2013-10-17 10:41:07:233 964 870 Report REPORT EVENT: {AD47FBDC-F7F9-4E7F-BAF4-DBA3784C7101} 2013-10-17 10:41:06:436-0600 1 202 [AU_REBOOT_COMPLETED] 102 {00000000-0000-0000-0000-000000000000} 0 0 AutomaticUpdates Success Content Install Reboot completed.
        2013-10-17 10:41:07:233 964 870 Report REPORT EVENT: {8D4E7A67-9526-4702-A897-5BE5F97497AF} 2013-10-17 10:41:06:639-0600 1 204 [AGENT_INSTALLING_FAILED_POST_REBOOT] 101 {8951E70D-4332-4F7C-B92D-D9362E384959} 1 c1900101 WSAcquisition Failure Content Install Installation Failure Post Reboot.
        2013-10-17 10:41:07:249 964 870 Report CWERReporter::HandleEvents - WER report upload completed with status 0x8
        2013-10-17 10:41:07:249 964 870 Report WER Report sent: 7.8.9200.16715 0xc1900101(0x20017) 8951E70D-4332-4F7C-B92D-D9362E384959 Install 101 Unmanaged
        2013-10-17 10:41:07:249 964 870 Report CWERReporter finishing event handling. (00000000)


  • Outgrew MongoDB … now what?

    - by samsmith
    We dump debug and transaction logs into MongoDB. We really like MongoDB because of its:

    - blazing insert performance
    - document-oriented model
    - ability to let the engine drop inserts when needed for performance

    But there is this big problem with MongoDB: the index must fit in physical RAM. In practice, this limits us to 80-150 GB of raw data (we currently run on a system with 16 GB RAM). Sooooo, for us to have 500 GB or a TB of data, we would need 50 GB or 80 GB of RAM.

    Yes, I know this is possible. We can add servers and use Mongo sharding. We can buy a special server box that can take 100 or 200 GB of RAM, but this is the tail wagging the dog! We could spend beaucoup $$$ on hardware to run FOSS, when SQL Server Express can handle WAY more data on WAY less hardware than Mongo (SQL Server does not meet our architectural desires, or we would use it!). We are not going to spend huge $ on hardware here, because it is necessary only because of the Mongo architecture, not because of the inherent processing/storage needs. (And sharding? Please! Cost aside, who needs the ongoing complexity of three, five, or more servers to manage a relatively small load?)

    Bottom line: MongoDB is FOSS, but we gotta spend $$$$$$$ on hardware to run it? We would rather buy commercial SW! I am sure we are not the first to hit this issue, so we ask the community: where do we go next? (We already run Mongo v2.) Thanks!!
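
    For sizing decisions, it may help to measure how much RAM the indexes actually demand rather than estimating from raw data size. An illustrative check from the mongo shell ("logs" is a hypothetical database name):

        mongo logs --eval 'printjson(db.stats())'          # indexSize is the figure to watch
        mongo logs --eval 'printjson(db.serverStatus().mem)'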


  • python mysqldb - mysql server gone away - can't reconnect

    - by david.barkhuizen
    Hi folks, when attempting to import a bunch of data into MySQL tables using Python and MySQLdb, I run into the error '2006 - MySQL server has gone away', and then I am unable to reconnect again within the script. I am initially re-using a connection object across transactions (delineated by conn.commit()). When I first encounter this exception, if I create a new connection by calling MySQLdb.connect(), the new connection also fails with the same exception.

    This error does not occur immediately. I can pump a fair amount of data into the db, but then it faithfully occurs after I have inserted a couple thousand records, so roughly once the db has committed a certain transaction volume, it always falls over like this. If I rerun the script, WITHOUT restarting the db server, it resumes where it left off, pumps in some data, then falls over again.

    Before recommendations to change time-out settings: does anyone know why I am not able to establish a new connection after the initial failure, even if I try a couple of times, waiting a couple of seconds between each? (BTW, I'm running Windows 7, MySQL Server 5.1.48, MySQLdb 1.2.3.gamma.1, Python 2.6.)
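
    For what it's worth, a classic trigger for error 2006 during bulk inserts is a single statement or packet exceeding the server's max_allowed_packet, which makes the server cut the connection. An illustrative check, and a common first experiment rather than a confirmed diagnosis:

        mysql -u root -p -e "SHOW VARIABLES LIKE 'max_allowed_packet'"
        # if it is small (e.g. the old 1M default), try raising it in my.ini under [mysqld]:
        #   max_allowed_packet = 64M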


  • SAN performance issues storing SQL Server tempdb on a SAN that's being backed up

    - by user42724
    I'm afraid I don't know much about SANs, so please forgive my lack of detail or technical terms. As a developer, I've just completed and deployed a new application onto an existing production system, but it would appear to have tipped the scales regarding the performance of the backups being taken from the SAN.

    As I understand it, there's a mirror of the SAN being taken, usually constantly, at the block level. However, there seem to be so many new writes to the disk that the SAN mirroring/backup process can no longer keep up. I believe I've narrowed this down to SQL Server's tempdb, which exists on a drive that contributes the largest portion of the problem! In fact, I think tempdb has been contributing the largest portion of the issues all along, regardless of my application!

    My question therefore is whether tempdb should ever be mirrored or backed up on the SAN, and whether anyone else has gone through this sort of pain already. I'm wondering whether it's a best practice to make sure that tempdb is never mirrored on a SAN, simply because any writes to it don't need to be saved.

    This also raises a slightly connected question: is it better to rely on SQL Server's built-in database backup tools (DB in full recovery mode with full/differential and transaction log backups), or, as is the case with our application, to leave SQL Server in simple recovery mode and never back it up, since the SAN is mirrored and backed up? Many thanks
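
    If it does come to keeping tempdb off the mirrored storage, relocating it is a small operation. A hypothetical sketch: the logical file names tempdev/templog are the defaults, the T: drive is an invented stand-in for a non-mirrored volume, and the move takes effect only after a SQL Server restart.

        sqlcmd -S . -E -Q "ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb.mdf'); ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\templog.ldf');"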


  • can't find port 22 traffic under VirtualBox

    - by telliott99
    I'm trying to learn to use tcpdump, so I thought I'd eavesdrop on my ssh login. The setup is a bit unusual: I have OS X Lion running VirtualBox, with Ubuntu running in the VM. I have ssh enabled and can log in from OS X normally:

        > ssh -p 22 10.0.1.2 -l telliott
        Welcome to Ubuntu 11.10 (GNU/Linux 3.0.0-17-generic i686)
        * Documentation: https://help.ubuntu.com/
        0 packages can be updated.
        0 updates are security updates.
        Last login: Sat Mar 31 19:54:36 2012 from toms-mac-mini.local
        telliott@U32:~$ logout
        Connection to 10.0.1.2 closed.

    I have not obfuscated the ssh port on Ubuntu. From OS X, stroke gives what I expect:

        > ./stroke 10.0.1.2 22 22
        Port Scanning host: 10.0.1.2
        Open TCP Port: 22 ssh

    So from OS X I do:

        > sudo tcpdump -i en1 -v port 22
        Password:
        tcpdump: listening on en1, link-type EN10MB (Ethernet), capture size 65535 bytes

    Then I log in from OS X to Ubuntu using ssh, but I see nothing with tcpdump. Here is ifconfig from Ubuntu:

        telliott@U32:~$ ifconfig eth1
        eth1      Link encap:Ethernet  HWaddr 08:00:27:d7:ba:0e
                  inet addr:10.0.1.2  Bcast:10.0.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::a00:27ff:fed7:ba0e/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:36497 errors:0 dropped:0 overruns:0 frame:14515
                  TX packets:44884 errors:1352 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:20781745 (20.7 MB)  TX bytes:17776225 (17.7 MB)
                  Interrupt:17 Base address:0xc000

    Where are the packets I was hoping to see? Thanks for any help.
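
    One hedged way to confirm the packets exist at all is to capture on the guest's side of the virtual NIC instead of the host's en1; depending on the VirtualBox network mode, the host's own traffic to the VM may never cross the physical interface that tcpdump is watching. Run inside the Ubuntu guest (eth1 is the guest interface shown above):

        sudo tcpdump -i eth1 -nn -v port 22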


  • Performance mitigations serving content from a UNC share via IIS 6

    - by codepoke
    I have a quad-processor VMware instance running Windows 2003 with gigabit Ethernet. I'm comparing serving the exact same heavy .NET 2.0 content from the local hard drive versus serving it from a UNC drive. If I use WCAT to load it down, I see about a 40% reduction in transactions/sec while serving from the UNC share. Processor time barely moves from 45%, and the NIC sits around 40% either way. I don't see any significant memory load either way. Context switches per transaction, though, more than double when serving from the UNC share. Path lengths more than double as well, but I believe that's just an expression of the effect of context switching.

    All told, it looks like the bottleneck is processor switching while waiting on content from the UNC share. Is my experience about the norm? Is there some mitigation I might try? I twiddled HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\MaxCmds a little bit per http://technet.microsoft.com/en-us/library/dd296694(WS.10).aspx, but to no obvious effect. I rather doubt my problem is a lack of connections; it seems to be just the act of switching from thread to thread while waiting on data.


  • How can I make Mac OS X Address Book display a person’s home address from an LDAP server?

    - by Arcturus
    Hi, I've posted this question on Stack Overflow first, but someone told me it belonged here. I have a custom LDAP server, which I can customize to generate whichever object class and attributes I need. I'm trying to display people from that server in the Mac OS X Address Book. Names and organizations display correctly, as do the work-related phone number and address. However, I've never been able to have a home address displayed in the Address Book. This is an example of output from running ldapsearch:

        # extended LDIF
        #
        # LDAPv3
        # base <dc=example,dc=com> with scope subtree
        # filter: (givenName=Joh*)
        # requesting: ALL
        #

        # 10041, example.com
        dn: uid=10041,dc=example,dc=com
        objectclass: top
        objectclass: person
        objectclass: organizationalPerson
        objectclass: inetOrgPerson
        objectclass: mozillaOrgPerson
        uid: 10041
        cn: John Doe
        givenName: John
        sn: Doe
        o: Acme
        telephoneNumber: 500 00 00
        mobile: 500 00 00
        mail: [email protected]
        street: Baker St
        postalCode: 10098
        l: New York
        c: US
        homePostalAddress: White St
        mozillaHomePostalCode: 10098
        mozillaHomeLocalityName: New York
        mozillaHomeCountryName: US

        # search result
        search: 2
        result: 0 Success

        # numResponses: 2
        # numEntries: 1

    Every piece of information shows up in the Address Book except these:

        homePostalAddress: White St
        mozillaHomePostalCode: 10098
        mozillaHomeLocalityName: New York
        mozillaHomeCountryName: US

    Which object class or attribute name should I use to have the home address show up in the Mac OS X Address Book?


  • Unable to access Windows share

    - by mbnoimi
    I've installed Alfresco 4.2.d under Ubuntu 12.04 LTS. Everything went fine, except that I can't access it from a Windows share, although I got the link from Alfresco Explorer, which is:

        file:///%5C%5CECSA%5CAlfresco%5CSites%5Cswsdp%5CdocumentLibrary%5CAgency%20Files%5CImages%5Ccoins.JPG

    I tried to access it via \\ECSA but failed, so I pinged the server (192.168.0.70 is the server IP):

        C:\Users\user>ping 192.168.0.70
        Pinging 192.168.0.70 with 32 bytes of data:
        Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
        Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
        Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
        Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
        Ping statistics for 192.168.0.70:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 0ms, Average = 0ms

        C:\Users\user>ping ECSA
        Ping request could not find host ECSA. Please check the name and try again.

    Some logs of what's going on:

        C:\Users\user>net view ECSA
        System error 1707 has occurred.
        The network address is invalid.

        C:\Users\user>nbtstat -a 192.168.0.70
        Local Area Connection:
        Node IpAddress: [192.168.0.84] Scope Id: []
        NetBIOS Remote Machine Name Table
        Name        Type     Status
        ---------------------------------------------
        ECSA        <20>     UNIQUE  Registered
        ECSA        <00>     UNIQUE  Registered
        WORKGROUP   <00>     GROUP   Registered
        MAC Address = 00-00-00-00-00-00

    CIFS server configuration in file-servers.properties:

        ### CIFS Server Configuration - file-servers.properties ###
        cifs.enabled=true
        cifs.serverName=${localname}A
        cifs.domain=
        cifs.broadcast=255.255.255.255
        cifs.bindto=192.168.0.70
        cifs.ipv6.enabled=false
        cifs.hostannounce=true
        cifs.disableNIO=false
        cifs.disableNativeCode=false
        cifs.sessionTimeout=900
        cifs.maximumVirtualCircuitsPerSession=16
        cifs.tcpipSMB.port=445
        cifs.netBIOSSMB.sessionPort=139
        cifs.netBIOSSMB.namePort=137
        cifs.netBIOSSMB.datagramPort=138
        cifs.WINS.autoDetectEnabled=true
        cifs.WINS.primary=192.168.0.70
        cifs.WINS.secondary=192.168.0.1
        cifs.sessionDebug=
        cifs.pseudoFiles.enabled=true
        cifs.pseudoFiles.explorerURL.enabled=true
        cifs.pseudoFiles.explorerURL.fileName=__Alfresco.url
        cifs.pseudoFiles.shareURL.enabled=false
        cifs.pseudoFiles.shareURL.fileName=__Share.url

    How can I fix this issue?
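
    The nbtstat output shows ECSA registered over NetBIOS while a plain ping of the name fails, so the client apparently can't resolve the name through DNS. A hedged experiment to separate name resolution from the CIFS service itself is to pin the name locally on the Windows client (run from an elevated prompt; the path is the standard hosts-file location):

        echo 192.168.0.70  ECSA >> %WINDIR%\System32\drivers\etc\hosts
        ping ECSA
        net view ECSA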


  • Linux on Sony Vaio VPCEB1S1E

    - by Jaakko
    I bought a Sony Vaio VPCEB1S1E and was able to surf the net. Then I tried to install Ubuntu 9.04 and Linux Mint on it, but neither allows me access to the Internet. How can I configure Mint so that I can get on the net and fetch updates via apt-get?

        jaakko@jaakko-laptop ~ $ ifconfig -a
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:12 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:720 (720.0 B)  TX bytes:720 (720.0 B)

        pan0      Link encap:Ethernet  HWaddr 46:83:d4:f4:36:bc
                  BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        wlan0     Link encap:Ethernet  HWaddr 78:dd:08:c5:61:88
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        wmaster0  Link encap:UNSPEC  HWaddr 78-DD-08-C5-61-88-00-00-00-00-00-00-00-00-00-00
                  UP RUNNING  MTU:0  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        jaakko@jaakko-laptop ~ $ ping 8.8.8.8
        connect: Network is unreachable
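
    wlan0 is UP but shows no RUNNING flag and no IP address, which usually means the card never associated with an access point. An illustrative first experiment (the SSID is a placeholder, and this plain iwconfig approach only covers open networks; WPA needs wpa_supplicant or the network applet):

        sudo iwlist wlan0 scan | grep ESSID      # can the card see any networks at all?
        sudo iwconfig wlan0 essid "YourNetwork"  # hypothetical SSID
        sudo dhclient wlan0                      # then request a DHCP lease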


  • CentOS 6 - How to install php-mysql when php-common @remi is present?

    - by Multitut
    I am having trouble adding MySQL support to my PHP installation, which was made using a ready-to-use package that came with our VPS. This is my php info: http://snake.quetzalcoatech.com/info.php

    I am trying to install php-mysql using:

        yum install php-mysql

    And get this output:

        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
        * base: mirrors.serveraxis.net
        * extras: mirror.fdcservers.net
        * updates: bay.uchicago.edu
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package php-mysql.x86_64 0:5.3.3-14.el6_3 will be installed
        --> Processing Dependency: php-common = 5.3.3-14.el6_3 for package: php-mysql-5.3.3-14.el6_3.x86_64
        --> Finished Dependency Resolution
        Error: Package: php-mysql-5.3.3-14.el6_3.x86_64 (updates)
               Requires: php-common = 5.3.3-14.el6_3
               Installed: php-common-5.3.17-2.el6.remi.x86_64 (@remi)
                   php-common = 5.3.17-2.el6.remi
               Available: php-common-5.3.3-3.el6_2.8.x86_64 (base)
                   php-common = 5.3.3-3.el6_2.8
               Available: php-common-5.3.3-14.el6_3.x86_64 (updates)
                   php-common = 5.3.3-14.el6_3
        You could try using --skip-broken to work around the problem
        You could try running: rpm -Va --nofiles --nodigest

    I am a noob using Linux, so could you tell me which command I should use to install a compatible php-mysql module? Thank you so much!
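
    Since php-common already comes from the remi repository (5.3.17), the base-repo php-mysql (5.3.3) can never match it. A hedged sketch of the usual way out is to pull php-mysql from the same repository so the versions line up:

        sudo yum --enablerepo=remi install php-mysql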


  • What's the difference between local and remote addresses in the 2008 firewall address scope?

    - by Ian
    In the Windows Firewall with Advanced Security manager / Inbound Rules / rule properties / Scope tab, you have two sections to specify local IP addresses and remote IP addresses. What makes an address qualify as a local or remote address, and what difference does it make? The answer is pretty obvious with a normal setup, but now that I'm setting up a remote virtualized server I'm not quite sure.

    What I've got is a physical host with two interfaces. The physical host uses interface 1 with a public IP. The virtualized machine is connected to interface 2 with a public IP. I have a virtual subnet between the two: 192.168.123.0.

    When editing a firewall rule, if I place 192.168.123.0/24 in the local IP address area or the remote IP address area, what does Windows do differently? Does it do anything differently? The reason I ask is that I'm having problems getting the domain communication working between the two with the firewall active. I have plenty of experience with firewalls, so I know what I want to do, but the logic of what is going on here escapes me, and these rules are tedious to edit one by one. Ian


  • How can MySQL be in GDAL's dependencies when it's already installed?

    - by Julien Fouilhé
    I'm trying to install GDAL on my 64-bit CentOS server to be able to perform some GIS operations. I tried a simple:

        # yum install gdal

    First, the GDAL version is 1.4 (the latest release is 1.9). Then, I see mysql in the dependency list. But I already have MySQL installed, from another repository (remi), with a newer version than the one suggested by yum... Is it a problem of architecture (yum suggests i386)? I risked a yes, but it is still impossible to install! Here's the error I get:

        Transaction Check Error:
        package mysql-5.5.28-1.el5.remi.x86_64 (which is newer than mysql-5.0.95-1.el5_7.1.i386) is already installed

    Then, I tried to install it from source with the latest version available (1.9.2). I downloaded the GDAL tar.gz, extracted the files, and installed it as follows:

        # tar -xzf gdal-1.9.2.tar.gz
        # ./configure --with-static-proj4=/usr/local/lib --with-threads --with-libtiff=internal --with-geotiff=internal --with-jpeg=internal --with-gif=internal --with-png=internal --with-libz=internal
        # make
        # make install

    But during the make, I see some strange errors about RegisterOGRMySQL that I can't understand:

        chmod a+x gdal-config
        /bin/sh /home/benjamin/gdal-1.9.2/libtool --mode=link g++ gdalinfo.lo /home/benjamin/gdal-1.9.2/libgdal.la -o gdalinfo
        libtool: link: g++ .libs/gdalinfo.o -o .libs/gdalinfo /home/benjamin/gdal-1.9.2/.libs/libgdal.so -L/usr/local/lib/lib -L/usr/kerberos/lib64 -lproj -lsqlite3 /usr/lib64/libexpat.so -lpthread -lrt -lcurl -ldl -lgssapi_krb5 -lkrb5 -lk5crypto -lcom_err -lidn -lssl -lcrypto -lz -Wl,-rpath -Wl,/usr/local/lib -Wl,-rpath -Wl,/usr/lib64
        /home/benjamin/gdal-1.9.2/.libs/libgdal.so: undefined reference to `RegisterOGRMySQL'
        collect2: ld returned 1 exit status
        make[1]: *** [gdalinfo] Error 1
        make[1]: Leaving directory `/home/benjamin/gdal-1.9.2/apps'
        make: *** [apps-target] Error 2

    Has anyone a solution? Thanks a lot!
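
    The undefined RegisterOGRMySQL reference suggests the build half-enabled the MySQL driver. One hedged experiment is to make the choice explicit at configure time; GDAL's configure script accepts --without-mysql, or --with-mysql pointed at mysql_config (the path below is an assumption):

        # rebuild cleanly without the MySQL driver ...
        make clean
        ./configure --without-mysql --with-static-proj4=/usr/local/lib --with-threads --with-libtiff=internal --with-geotiff=internal --with-jpeg=internal --with-gif=internal --with-png=internal --with-libz=internal
        make
        # ... or, if the MySQL driver is needed, point configure at mysql_config instead:
        #   ./configure --with-mysql=/usr/bin/mysql_config ...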


  • Authenticate users with Zimbra LDAP Server from other CentOS clients

    - by efesaid
    I'm wondering how I can integrate my database, web, backup, etc. CentOS servers with the Zimbra LDAP server. Does it require more advanced configuration than standard LDAP authentication?

    My Zimbra server version is:

        [zimbra@zimbra ~]$ zmcontrol -v
        Release 8.0.5_GA_5839.RHEL6_64_20130910123908 RHEL6_64 FOSS edition.

    My LDAP server status is:

        [zimbra@ldap ~]$ zmcontrol status
        Host ldap.domain.com
            ldap        Running
            snmp        Running
            stats       Running
            zmconfigd   Running

    I already installed the nss-pam-ldapd packages on my servers:

        [root@www]# rpm -qa | grep ldap
        nss-pam-ldapd-0.7.5-18.2.el6_4.x86_64
        apr-util-ldap-1.3.9-3.el6_0.1.x86_64
        pam_ldap-185-11.el6.x86_64
        openldap-2.4.23-32.el6_4.1.x86_64

    My /etc/nslcd.conf is:

        [root@www]# tail -n 7 /etc/nslcd.conf
        uid nslcd
        gid ldap
        # This comment prevents repeated auto-migration of settings.
        uri ldap://ldap.domain.com
        base dc=domain,dc=com
        binddn uid=zimbra,cn=admins,cn=zimbra
        bindpw **pass**
        ssl no
        tls_cacertdir /etc/openldap/cacerts

    But when I run:

        [root@www ~]# id username
        id: username: No such user

    And I am sure that the username user exists on the LDAP server.

    EDIT: When I run the ldapsearch command, I get the full result with credentials and DN:

        [root@www ~]# ldapsearch -H ldap://ldap.domain.com:389 -w **pass** -D uid=zimbra,cn=admins,cn=zimbra -x 'objectclass=*'
        # extended LDIF
        #
        # LDAPv3
        # base <dc=domain,dc=com> (default) with scope subtree
        # filter: objectclass=*
        # requesting: ALL
        #

        # domain.com
        dn: dc=domain,dc=com
        zimbraDomainType: local
        zimbraDomainStatus: active
        .
        .
        .
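
    If ldapsearch succeeds with the same credentials but `id` fails, the gap is usually between nslcd and NSS rather than LDAP itself. An illustrative debugging pass (service and file names per CentOS 6 defaults):

        # confirm NSS is actually pointed at ldap
        grep '^passwd' /etc/nsswitch.conf   # should read something like: passwd: files ldap
        # run nslcd in the foreground with debug output, then try `id username` from another shell
        service nslcd stop
        nslcd -d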


  • Windows Server 2003 DHCP not handing out IPs

    - by SnOrfus
    I'm trying to set up a home server (to tinker with) as a domain controller. I've set up the domain, and I've installed DHCP and set up a scope without any exclusions (with the default range of 192.168.0.1-254). My client machine is a Windows 7 (RC) machine; it has a connection but can't get an IP address. Even if I set the IP to a static 192.168.0.2, there is still no connectivity. I can ping it from the server, but pinging the server from the client just times out. The only thing between the server and the client is a 24-port switch (D-Link DES-1024D).

    Edit: OK, it turned out that the interfaces were set up backwards in the NAT settings (the internal NIC connection was set to public and the external NIC connection was set to private). I changed this and all was OK... sort of. The problem now: if I set a static IP on the client (where I am typing this from), all is fine. BUT when I set it to get an address from DHCP, I get a correct IP from the server (192.168.0.2) but there is no internet on the client; I can still ping the server fine from the client (which makes sense, because I was able to get an IP from it).

    Edit: I ended up just removing the Routing and DHCP server roles and going with ICS for the time being, until I get my hands on some better learning tools.


  • Cisco SA520 to Adtran 1234 no DHCP transfer

    - by Grico
    I am trying to set up a Cisco SA520 to run DHCP on my network. I have a vendor-provided switch, an Adtran 1234, which provides DHCP for our phone system on VLAN 200. I do not have access to the Adtran, but the vendor gave me an IP on port 1 for the WAN and said port 2 should be where the "trust" side goes.

    I set up a mini lab where Adtran port 1 went to the SA520 WAN port, and SA520 trust port 1 went to my laptop. Everything worked fine; I could ping and get internet using the DHCP scope I put on the SA520. I then unplugged my computer from SA520 trust port 1 and plugged it into Adtran port 2. I also plugged my computer into Adtran port 23, and I don't get DHCP or even a link light. If I restart my machine, I get a brief link and then it dies once the machine boots. I have tried several ports on the Adtran and none seem to work; different cables as well. However, when I plug a phone into the Adtran, the phone boots immediately and shows link. Thoughts?


  • Workstations cannot see new MS Server 2008 domain, but can access DHCP.

    - by Radix
    The XP Pro workstations do not see the new replacement domain upon boot; they only see their cached entry for the old (Server 2003) domain controller. The old server is not connected to the network. I have DHCP working with the same scope as the old server. In my "before-asking" search for a solution, I came across the following two articles, and I recall doing things as they suggested:

    http://www.windowsreference.com/windows-server-2008/how-to-setup-dhcp-server-in-windows-server-2008-step-by-step-guide/
    http://www.windowsreference.com/windows-server-2008/step-by-step-guide-for-windows-server-2008-domain-controller-and-dns-server-setup/

    The only possible issue: I was under the impression that the domain NetBIOS name needed to match the DC's NetBIOS name. The DC NetBIOS name is city01, while the domain's FQDN is city.domain.org (I think this is mistaken and should have been just domain.org).

    But the second link led me to a post which I believe answers my question. I did as instructed by opening Local Area Connection Properties, then selecting TCP/IPv4 and setting the sole preferred DNS server to the local host's static IP (10.10.1.1). Search for "Your problems should clear up" in the post I'm referencing: http://forums.techarena.in/active-directory/1032797.htm

    Have I misunderstood their instructions? I am hoping to reach the point where I can define users and user groups. Also, does TechNet have a single theoretical overview document I could read? I really don't like treating computers as magic. I will be watching this closely and will quickly answer any questions. If I've left anything out, it is because I did not know it was needed.

    PS: I am loath to ask obviously basic questions, but I am tired and wish to fix this before tomorrow. Also, this is my first server installation; thank you for your help.
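
    One illustrative check that often separates "DNS is wrong" from "the domain is wrong": ask the DC's own DNS server for the Active Directory SRV record the workstations use to find a domain controller (domain name and DC address as given above):

        nslookup -type=SRV _ldap._tcp.dc._msdcs.city.domain.org 10.10.1.1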



  • How to grep IPs from ifconfig output

    - by Registered User
    The following is my ifconfig output:

        eth0      Link encap:Ethernet
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:28 Base address:0x2000

        eth1      Link encap:Ethernet
                  inet addr:192.168.1.2  Bcast:192.168.1.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:36497 errors:0 dropped:0 overruns:0 frame:14515
                  TX packets:44884 errors:1352 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:20781745 (20.7 MB)  TX bytes:17776225 (17.7 MB)
                  Interrupt:17 Base address:0xc000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:12 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:720 (720.0 B)  TX bytes:720 (720.0 B)

        virbr0    Link encap:Ethernet
                  inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:4416 (4.4 KB)

        vmnet1    Link encap:Ethernet
                  inet addr:192.168.185.1  Bcast:192.168.185.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        vmnet8    Link encap:Ethernet
                  inet addr:192.168.207.1  Bcast:192.168.207.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:25 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    I want to grep this so that I see the IP corresponding to each LAN card. Is that possible? How can it be achieved?
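
    Given this output format (interface name at the start of each block, address on the following "inet addr:" line), a small illustrative awk one-liner can pair the two; grep alone sees only one line at a time, so awk is the easier tool here:

        # prints "interface ip" for every interface that has an IPv4 address
        ifconfig | awk '/Link encap/{iface=$1} /inet addr:/{sub(/addr:/,"",$2); print iface, $2}'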


  • Is 30 calls / second a lot for one IIS server?

    - by Lieven Cardoen
    We have an RIA application that 300 clients use concurrently in an intranet environment. Together they make 30 calls/second to IIS (ASP.NET); actually it's 60, but the calls are load-balanced over two IIS servers. Half of the calls fetch an asset (a caching profile is used, so most of the time the cache is hit); the other half save data to a SQL Server. Retrieving an asset is done with an .aspx page. Saving the data happens via WebORB, ASP.NET, and SQL Server, so some processing is needed by WebORB (AMF decoding, gzip, ...). We also use Spring.NET, and some of the container objects have request scope (not a lot).

    The IIS servers are virtual machines with 4 CPUs and 2 GB RAM, based on Windows 2008 x64 SP2 Enterprise Edition. SQL Server 2008 is used. Apparently the CPU of both IIS servers sits constantly around 60-70%.

    Now, my question: is a load of 60-70% acceptable, and how could we possibly bring it down (maybe using only one IIS server)? Also, is 2 GB RAM enough? Assets can be up to 20 MB, but on average they are about 30 KB (the 60-70% load is achieved with assets around 30 KB). The data that gets saved with WebORB is very small (2 KB) and is just one object.


  • How does QuickBooks handle IIF imports?

    - by dwwilson66
    I've received a 'template' for an IIF file for QuickBooks transactions, and there are like seventy-bazillion fields in there, lots of which I never even use. It's a tab-delimited file with the following lines: field headers for transactions and the respective splits for those transactions, followed by an end-of-transaction marker.

        !TRNS   FIELD1  FIELD2  FIELD3  ... FIELD48
        !SPL    FIELD1  FIELD2  FIELD3  ... FIELD48
        !ENDTRNS
        TRNS    FIELD1_DATA FIELD2_DATA FIELD3_DATA ... FIELD48_DATA
        SPL     FIELD1_DATA FIELD2_DATA FIELD3_DATA ... FIELD48_DATA
        ENDTRNS
        ...

    What drives data to a particular field? Is it the field header with corresponding data, or is it the tabular position relative to the head of the line? E.g., let's say all I have to import is the data in FIELD1, FIELD3, and FIELD5. Would I need, by header:

        !TRNS   FIELD1  FIELD3  FIELD5
        !SPL    FIELD1  FIELD3  FIELD5
        !ENDTRNS
        TRNS    FIELD1  FIELD3  FIELD5
        SPL     FIELD1  FIELD3  FIELD5
        ENDTRNS

    or, by tabular position:

        !TRNS   FIELD1  FIELD2  FIELD3  FIELD4  FIELD5
        !SPL    FIELD1  FIELD2  FIELD3  FIELD4  FIELD5
        !ENDTRNS
        TRNS    FIELD1_DATA FIELD2_BLANK FIELD3_DATA FIELD4_BLANK FIELD5_DATA
        SPL     FIELD1_DATA FIELD2_BLANK FIELD3_DATA FIELD4_BLANK FIELD5_DATA
        ENDTRNS

    Alternately, if it were a comma-delimited input, would I need:

        DATA1,DATA3,DATA5

    or:

        DATA1,,DATA3,,DATA5

    Anyone have any experience with what QuickBooks is doing?
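
    For illustration, here is a minimal hypothetical IIF written in the header-driven style, with only the columns it uses declared (TRNSTYPE, DATE, ACCNT, and AMOUNT are standard IIF headers; the account names and amounts are invented, and the columns are tab-separated). Feeding a file like this to a throwaway company file is a low-risk way to settle the question empirically:

        !TRNS   TRNSTYPE        DATE    ACCNT   AMOUNT
        !SPL    TRNSTYPE        DATE    ACCNT   AMOUNT
        !ENDTRNS
        TRNS    GENERAL JOURNAL 01/15/2012      Checking        -100.00
        SPL     GENERAL JOURNAL 01/15/2012      Office Expenses 100.00
        ENDTRNS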

