Search Results

Search found 23613 results on 945 pages for 'query parameters'.

  • MYSQL - Multiple set values in one update statement [migrated]

    - by Maurzank
    MYSQL - MULTIPLE SET VALUES IN ONE UPDATE STATEMENT USING 2 TABLES AS REFERENCE AND STORING VALUES IN ONE OF THOSE TABLES WITH A SPECIFIC LOGIC.

    Hello people, a problem came up while making an UPDATE. The example issue is as follows:

        CURRENUSRTABLE
        +------------+-------+
        | ID         | STATE |
        +------------+-------+
        | 123        | 3     |
        | 456        | 3     |
        | 789        | 3     |
        +------------+-------+

        HISTORYTABLE
        +------------+------------+-----+
        | ID         | TRDATE     | ACT |
        +------------+------------+-----+
        | 123        | 2013-11-01 | 5   |
        | 456        | 2013-11-01 | 5   |
        | 789        | 2013-11-01 | 5   |
        | 123        | 2013-11-02 | 4   |
        | 456        | 2013-11-02 | 4   |
        | 789        | 2013-11-02 | 4   |
        | 123        | 2013-11-03 | 3   |
        | 456        | 2013-11-03 | 3   |
        | 789        | 2013-11-03 | 3   |
        +------------+------------+-----+

    I'm using these variables: @BA=3, @DE=5, @BL=4. What I'm trying to do is update CURRENUSRTABLE.STATE from HISTORYTABLE.ACT with the following logic: STATE is set to the ACT value, except when STATE is 4 and ACT is 3, in which case STATE becomes 5.

    I made this statement:

        UPDATE CURRENUSRTABLE
        RIGHT OUTER JOIN HISTORYTABLE ON HISTORYTABLE.ID=CURRENUSRTABLE.ID
        SET CURRENUSRTABLE.STATE=
        (
          SELECT CASE HISTORYTABLE.ACT
                   WHEN @DE THEN @DE
                   WHEN @BL THEN @BL
                   WHEN @BA THEN
                     CASE CURRENUSRTABLE.STATE
                       WHEN @BL THEN @DE
                       ELSE @BA
                     END
                 END
          ORDER BY HISTORYTABLE.TRDATE,FIELD(HISTORYTABLE.ACT,@DE,@BL,@BA)
        )
        WHERE HISTORYTABLE.TRDATE BETWEEN '2013-11-01' AND '2013-11-01'

    I'm intentionally using "RIGHT OUTER JOIN" and "HISTORYTABLE.TRDATE BETWEEN" because I'd like to update CURRENUSRTABLE over a timeframe of more than one day. If I execute this statement several times, one day at a time (i.e. "BETWEEN '2013-11-01' AND '2013-11-01'", then "BETWEEN '2013-11-02' AND '2013-11-02'", and so on), it works perfectly. But if it is executed once with "BETWEEN '2013-11-01' AND '2013-11-03'", the resulting CURRENUSRTABLE.STATE values are 3, which is wrong; they should be 5.

    I think the problem lies in "CASE CURRENUSRTABLE.STATE" when the range is '2013-11-01' to '2013-11-03': the statement reads STATE nine times, but the new value is not visible until the statement finishes. Query OK, 9 rows affected (0.00 sec) Rows matched: 9 Changed: 9 Warnings: 0

    Maybe the solution is very simple, but unfortunately I don't have much practice with MySQL, since I've worked with it for less than 2 months :) Are there any suggestions for solving this issue? PS: the MySQL version is 4.1.22; I know it is very old and EOL, but unfortunately I have to run these statements on this version. Thanks!
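
    One way to avoid reading CURRENUSRTABLE.STATE while the same statement is updating it is to derive the new state entirely from HISTORYTABLE, using the last two ACT values per ID in the date range. The sketch below is only an illustration of that idea, not tested on 4.1.22; it assumes the literal values 5/4/3 stand in for @DE/@BL/@BA and that the special case can always be decided from the two most recent history rows:

        UPDATE CURRENUSRTABLE c
        INNER JOIN (
            SELECT h1.ID,
                   CASE WHEN h1.ACT = 3 AND h2.ACT = 4 THEN 5  -- BA following BL becomes DE
                        ELSE h1.ACT
                   END AS NEWSTATE
            FROM HISTORYTABLE h1
            LEFT JOIN HISTORYTABLE h2
              ON h2.ID = h1.ID
             AND h2.TRDATE = (SELECT MAX(h3.TRDATE) FROM HISTORYTABLE h3
                              WHERE h3.ID = h1.ID AND h3.TRDATE < h1.TRDATE
                                AND h3.TRDATE BETWEEN '2013-11-01' AND '2013-11-03')
            WHERE h1.TRDATE = (SELECT MAX(h4.TRDATE) FROM HISTORYTABLE h4
                               WHERE h4.ID = h1.ID
                                 AND h4.TRDATE BETWEEN '2013-11-01' AND '2013-11-03')
        ) latest ON latest.ID = c.ID
        SET c.STATE = latest.NEWSTATE;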

    Read the article

  • How do I make XTerm not use bold?

    - by mike
    I like using XTerm, I like its default "fixed" font, and I like using terminal colors rather than having a monochromatic terminal. However, XTerm seems to insist on using a bold version of the font whenever it's displaying a bright color. I hate hate hate the bold version of the font, but I like the brightness. The man page seems to suggest that adding "XTerm.VT100.boldMode:false" to my ~/.Xresources would disable this "feature", but it doesn't seem to have any effect. I've had it in there for months, so it's not a rebooting issue. How can I force XTerm to always use the standard, non-bold version of the fixed font, even when it's displaying bright text?

    Edit: Some have suggested putting "XTerm*boldMode: false" in my ~/.Xresources. That didn't help either. I've confirmed that the changes have taken effect with xrdb, though:

        $ xrdb -query | grep boldMode
        XTerm*boldMode: false

    And if I run xprop and click an xterm, I get WM_CLASS(STRING) = "xterm", "XTerm", so I'm definitely running real xterms. BTW, this is just a plain-vanilla Ubuntu Intrepid box. If anyone else here is running the same, can you try running:

        echo -e '#\e[1m#'

    ...and let me know whether the # on the right has a black pixel in the middle like the one on the left does?

    Read the article

  • How can I parse/transform text log data before it gets captured in SCOM 2007 R2?

    - by Abs
    I'm pretty much a noob with System Center Operations Manager 2007, and I'm probably missing something pretty basic, but I'm stumped anyway. We're setting up monitoring on some of our servers, and we'd like to capture data from some plain text log files (e.g. DNS debug logs, DHCP logs). It looks to me like I can set up a generic text file monitoring rule and get events captured into the main Ops Manager database, but my understanding is that the whole line of text from the plain text log gets captured as one field. In an ideal world, we'd be able to parse or transform that log file data to make it easier to query later. Is this possible? Is it easy? Do I have to buy expensive 3rd-party software to do it? One more thing: it would be even better if there was a way to stuff this data into the Audit Collection Services (ACS) database instead of the main one, but I'll take what I can get. Any help would be greatly appreciated.

    Read the article

  • Listing group members using ldapsearch

    - by colemanm
    Our corporate LDAP directory is housed on a Snow Leopard Server Open Directory setup. I'm trying to use the ldapsearch tool to export an .ldif file to import into another external LDAP server to authenticate with externally; basically trying to be able to use the same credentials internally and externally. I've got ldapsearch working and giving me the contents and attributes of everything in the "Users" OU, and even filtering down to only the attributes I need:

        ldapsearch -xLLL -H ldap://server.domain.net / -b "cn=users,dc=server,dc=domain,dc=net" objectClass / uid uidNumber cn userPassword > directorycontents.ldif

    That gives me a list of users and properties that I can import to my remote OpenLDAP server:

        dn: uid=username1,cn=users,dc=server,dc=domain,dc=net
        objectClass: inetOrgPerson
        objectClass: posixAccount
        objectClass: organizationalPerson
        uidNumber: 1000
        uid: username1
        userPassword:: (hashedpassword)
        cn: username1

    However, when I try the same query on an OD "group" instead of a "container," the results are something like this:

        dn: cn=groupname,cn=groups,dc=server,dc=domain,dc=net
        objectClass: posixGroup
        objectClass: apple-group
        objectClass: extensibleObject
        objectClass: top
        gidNumber: 1032
        cn: groupname
        memberUid: username1
        memberUid: username2
        memberUid: username3

    What I really want is a list of users from the top example filtered based on their group memberships, but it looks like membership is set from the Group side, rather than the user account side. There must be a way to filter this down and only export what I need, right?

    Read the article

  • Question about Displaying Documents and the CQWP in MOSS 2007

    - by Psycho Bob
    My organization is in the process of converting our intranet over to a SharePoint solution. Part of this intranet will be the movement and organization of all our internal documents. Currently, we have 11 pages of document links, each with its own subheadings. So far I have it set up so that each document has a custom field called "Page" with a check box list of all the document pages on the intranet site. On each individual page, I have set up a Content Query Web Part that displays the documents that have the corresponding Page value set (i.e. if a document's Page value has been checked for "HR", it will appear on the HR page). The goal of this setup is to allow the nontechnical personnel who will be responsible for the maintenance of the documents to upload new documents to the document list and note which pages they should appear on, without having to manually update the pages themselves.

    The problem I am having is that I cannot seem to find a good way to sort the documents into their subheadings once they are on the appropriate page. I could create individual check boxes for each page/subheading combination, but this would create a list of approximately 50-75 items. Does anyone have any ideas as to how I could accomplish this, either via CQWP or by different means?

    Goals/Requirements of Installation:
    - Allow intranet documents to be maintained by nontechnical personnel
    - Display documents on the appropriate pages without the user having to edit the actual page or web part
    - Denote document page location using user-settable document attributes (if possible)
    - Maintain current intranet organization and workflow
    - Use only one document list without subdirectories

    NOTE: I am aware that this is not the most efficient or elegant way to do things, but these are the requirements I have been given for the project.

    Read the article

  • Symbolic links not working in MySQL

    - by Eno
    I'm having an issue; I searched a lot but I'm not sure if it's related to a previous security patch. On the latest version of MySQL on Debian Lenny (5.0.51a-24) I need to share one table between two databases that live in the same path (/var/lib/mysql/db1 & db2). I created symbolic links in db2 pointing to the table in db1. When I query the same table from db2 I get this: 'ERROR 1030 (HY000): Got error 140 from storage engine'

    This is how it looks:

        test-lan:/var/lib/mysql/test3# ls -alh
        drwx------ 2 mysql mysql 4.0K 2010-08-30 13:28 .
        drwxr-xr-x 6 mysql mysql 4.0K 2010-08-30 13:29 ..
        lrwxrwxrwx 1 mysql mysql   28 2010-08-30 13:28 blbl.frm -> /var/lib/mysql/test/blbl.frm
        lrwxrwxrwx 1 mysql mysql   28 2010-08-30 13:28 blbl.MYD -> /var/lib/mysql/test/blbl.MYD
        lrwxrwxrwx 1 mysql mysql   28 2010-08-30 13:28 blbl.MYI -> /var/lib/mysql/test/blbl.MYI
        -rw-rw---- 1 mysql mysql   65 2010-08-30 13:24 db.opt

    I really need those symlinks. Is there a way to make them work like before? (The old MySQL server is fine.) Thanks,

    Read the article

  • Object Not found - Apache Rewrite issue

    - by Chris J. Lee
    I'm pretty new to setting up Apache locally with XAMPP. I'm trying to develop locally with XAMPP for Linux 1.7.4 on Ubuntu 11.04 for a Drupal site. I've actually git-pulled an exact copy of this Drupal site from another testing server hosted at MediaTemple.

    Issue: I'll visit my local development environment virtual host (http://bbk.loc) and the front page renders correctly with no errors from Drupal or Apache. The issue is that subsequent pages return an "Object not found" error from Apache. What is more bizarre is that when I add various query strings, the pages are found (like http://bbk.loc?p=user).

    VHost file:

        NameVirtualHost bbk.loc:*
        <Directory "/home/chris/workspace/bbk/html">
          Options Indexes Includes execCGI
          AllowOverride None
          Order Allow,Deny
          Allow From All
        </Directory>
        <VirtualHost bbk.loc>
          DocumentRoot /home/chris/workspace/bbk/html
          ServerName bbk.loc
          ErrorLog logs/bbk.error
        </VirtualHost>

    bbk.error log file:

        [Mon Jun 27 10:08:58 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/node, referer: http://bbk.loc/
        [Mon Jun 27 10:21:48 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/sites/all/themes/bbk/logo.png, referer: http://bbk.$
        [Mon Jun 27 10:21:51 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/node, referer: http://bbk.loc/

    Actions I've taken:
    - Moved rewrite module loading to load before the cache module (http://drupal.org/node/43545)
    - Verified mod_rewrite works with the .htaccess file

    Any ideas why mod_rewrite might not be working?

    Read the article

  • Hyper-V Machine drifts time all over, even with NTP

    - by MichaelGG
    Resolved: The problem was Hyper-V on that machine. I removed Hyper-V, installed VMware Server, and ran the same VM. The time sync issues went away (< 100ms difference after a day).

    My setup is like this:

    - HYV1 - Hyper-V machine (non-domain) - sync irrelevant
    - AD1 - VM AD server on HYV1, sync'd to time.nist.gov. Hyper-V time sync off.
    - S1 - Physical machine, sync'd to domain.
    - S2 - Physical machine running Hyper-V, sync'd to domain.
    - V1 - Linux VM on S2, sync'd to AD1. No Hyper-V integration.

    AD1 and S1 have fine sync -- the stripchart shows less than 100ms difference. S2 drifts like crazy. Here's a bit of the stripchart against AD1:

        18:33:22 d:+00.0010138s o:+05.4101899s
        18:33:24 d:+00.0010138s o:+05.4319765s
        18:33:26 d:+00.0000000s o:+05.4788429s
        18:33:28 d:+00.0000000s o:+05.6089942s
        18:33:30 d:+00.0010138s o:+05.7240269s
        18:33:32 d:+00.0000000s o:+06.0421911s
        18:33:34 d:+00.0081104s o:+06.5613708s
        18:33:37 d:+00.0000000s o:+06.9096594s
        18:33:39 d:+00.0000000s o:+06.8867838s
        18:33:41 d:+00.0010127s o:+06.8936401s

    In 20 seconds, it drifted over a second. If I manually reset it to within 1s, within a few minutes it'll be back to drifting about 2 seconds. Overnight it went from ~2s to ~5s. The Linux VM inside S2 has perfect sync with AD1. Here's the config:

        C:\Users\mgg>w32tm /dumpreg /subkey:Parameters
        Value Name              Value Type      Value Data
        ------------------------------------------------------------
        ServiceDll              REG_EXPAND_SZ   %systemroot%\system32\w32time.dll
        ServiceMain             REG_SZ          SvchostEntry_W32Time
        ServiceDllUnloadOnStop  REG_DWORD       1
        Type                    REG_SZ          NT5DS
        NtpServer               REG_SZ          ad01.mydomain ad02.mydomain

        C:\Users\mgg>w32tm /dumpreg /subkey:Config
        Value Name              Value Type      Value Data
        -----------------------------------------------------------
        FrequencyCorrectRate    REG_DWORD       4
        PollAdjustFactor        REG_DWORD       5
        LargePhaseOffset        REG_DWORD       50000000
        SpikeWatchPeriod        REG_DWORD       900
        LocalClockDispersion    REG_DWORD       9
        HoldPeriod              REG_DWORD       5
        PhaseCorrectRate        REG_DWORD       1
        UpdateInterval          REG_DWORD       30000
        EventLogFlags           REG_DWORD       2
        AnnounceFlags           REG_DWORD       5
        TimeJumpAuditOffset     REG_DWORD       28800
        MinPollInterval         REG_DWORD       2
        MaxPollInterval         REG_DWORD       8
        MaxNegPhaseCorrection   REG_DWORD       -1
        MaxPosPhaseCorrection   REG_DWORD       -1
        MaxAllowedPhaseOffset   REG_DWORD       300

    I looked at the event log, and apart from warnings about sync (after it gets way out of sync), there are no other warnings. How can I go about troubleshooting this? It's the only machine that is having this problem. All the other machines (physical and virtual) are doing fine.

    Edit: To clarify: the VM (AD1) has integration turned off and syncs to time.nist.gov. AD1 is fine. It's the physical machine S2 that can't sync to AD1 and drifts all over. All the other physical servers are able to sync to AD1 just fine.

    Update: So, it appears to be an issue with running the VM. The clock slips slowly with the VM off. Turned on, it immediately starts losing seconds. I set the VM to only use half the resources, and that seems to have slightly mitigated it, for now. Thanks!

    Read the article

  • SharePoint Business Connectivity Services (BCS) Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'

    - by g18c
    I am running SharePoint 2010 with SQL 2012 and am trying to get Business Connectivity Services (BCS) running, but I am facing a double-hop authentication issue. Every time I try to connect to the external BCS list created in SharePoint Designer, I get the error "Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'". In the event viewer on the SQL server I see a login failure for an anonymous user from the SP server IP address. Background information below.

    I have enabled Kerberos under SharePoint Central Admin. I have the following AD domain accounts:

    - SP_Farm - main website pool
    - SP_Services - for SharePoint services (including BCS)
    - SQL_Engine - SQL database engine

    I then created the following with SetSPN:

        SetSPN -S http/intranet mydomain\SP_Farm
        SetSPN -S http/intranet.mydomain.local mydomain\SP_Farm
        SetSPN -S SPSvc/SPS mydomain\SP_Farm
        SetSPN -S MSSQLSvc/SQL1 mydomain\SQL_DatabaseEngine
        SetSPN -S MSSQLSvc/SQL1.mydomain.local mydomain\SQL_DatabaseEngine
        SetSPN -S MSSQLSvc/SQL1:1433 mydomain\SQL_DatabaseEngine
        SetSPN -S MSSQLSvc/SQL1.mydomain.local:1433 mydomain\SQL_DatabaseEngine

    I then delegated the AD accounts for any authentication protocol to the following:

    - SP_Farm - SP_Farm (http service type, intranet)
    - SP_Farm - SQL_DatabaseEngine (MSSQLSvc, sql1)
    - SP_Service - SP_Service (SPSvc)
    - SP_Service - SQL_DatabaseEngine (MSSQLSvc, sql1)

    I have also checked that the WFE is being logged on to with Kerberos: the WFE server event log shows event ID 4624 with Kerberos authentication, so this is OK. SQL also shows connections from the WFE authenticated as Kerberos, per the following query:

        SELECT s.session_id, s.login_name, s.host_name, c.auth_scheme
        FROM sys.dm_exec_connections c
        INNER JOIN sys.dm_exec_sessions s ON c.session_id = s.session_id

    Despite the above, credentials are not passed from the client through the SharePoint server to the SQL server; only the anonymous account is used. I get the following error on the WFE server for 'BusinessData', ID 8080:

        Could not open connection using 'data source=sql1.mydomain.local;initial catalog=MSCRM;integrated security=SSPI;pooling=true;persist security info=false' in App Domain '/LM/W3SVC/1848937658/ROOT-1-129922939694071446'. The full exception text is: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    If I set a username and password with the Secure Store Service and set the external list to use the impersonated credentials, the list works. Any ideas what I have missed and what can be tried next?

    Read the article

  • MSSQLSERVER Will Not Start - Event ID 913 and 1814

    - by ThaKidd
    Hello ServerFault! I need some serious help. I have a major database server down and am scratching my head over how to fix it. The server was hit by rolling blackouts last week in Dallas, and since then Microsoft SQL 2005 SP2 will not start up. I am getting the following errors (both when starting the service and when trying to execute mssqlsrv.exe -c -f -m):

        Event Type: Error
        Event Source: MSSQLSERVER
        Event ID: 913
        Could not find database ID 3. Database may not be activated yet or may be in transition. Reissue the query once the database is available. If you do not think this error is due to a database that is transitioning its state and this error continues to occur, contact your primary support provider. Please have available for review the Microsoft SQL Server error log and any additional information relevant to the circumstances when the error occurred.

    and...

        Event Type: Information
        Event Source: MSSQLSERVER
        Event ID: 1814
        Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive and then restart SQL Server. Check for additional errors in the event log that may indicate why the tempdb files could not be initialized.

    I have tried renaming tempdb.mdf to tempdb.old with no success. I have checked and have 193 GB of free hard drive space. What else might cause this problem? Could the server need a chkdsk run on it, or do I need to be looking at some other area of the database server? Any help is greatly appreciated. Thank you in advance.
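
    For what it's worth, database ID 3 is normally the model database, and tempdb is rebuilt from model at every startup, so a damaged or unreachable model (or a tempdb path that no longer exists) can produce exactly this pair of errors. A rough sketch of one recovery path, assuming the instance can be started in minimal/single-user mode and that D:\SQLData is a hypothetical location with free space, is to repoint tempdb and then restart the service:

        -- run against the instance started with -c -f -m, then restart normally
        ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\SQLData\tempdb.mdf');
        ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\SQLData\templog.ldf');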

    Read the article

  • Outlook 2007/2010 autodiscovering old Exchange info

    - by Dan
    I currently have an Exchange setup as follows: two Exchange 2003 servers clustered together set up as the current mailbox stores, one Exchange 2003 setup as a frontend, one Exchange 2007 set up as a frontend (was set up for testing by my predecessor, never really used intentionally), and now four Exchange 2010 servers - two mailboxes in a DAG and two with Hub/CAS. Everything seems to be working fine with one exception - Outlook 2007/2010 clients are still autodiscovering the test 2007 frontend and not the 2010 CAS array. I know this because there's an expired cert on the 2007 box so the client displays a cert error when you attempt to autocreate the outlook profile. From what I've read, there is an SCP (Service Connection Point) in AD that is pointing to the old server and it is getting returned first, causing Outlook to try it first. How can I prevent Outlook from even attempting to connect to this 2007 box from now on? http://www.msexchange.org/articles_tutorials/exchange-server-2010/management-administration/exchange-autodiscover.html When Outlook 2007 is installed on a domain joined workstation then the Outlook client will query Active Directory for the Autodiscover information. Active Directory will return a list of SCP’s and the Outlook client will automatically select the first SCP in this list. Using the information found in the SCP the Outlook client will contact the Client Access Server for its configuration information and the Outlook client will be configured automatically.

    Read the article

  • Kickstart based installation of RHEL 6.4 from USB

    - by Peter
    I want to set up some brand new servers with RHEL 6.4. The servers do not have a DVD drive, so I have to use USB for the installation. I already have a custom ISO with a kickstart file that I use flawlessly on servers with a DVD drive. I used iso2usb to move the ISO to my USB stick. When I boot from the USB, the ks file is found and anaconda starts, but then it stops with the following error: "The installation source given by device ['sda1'] could not be found. Please check your parameters and try again"

    Notes: The USB IS sda. My custom ISO file is renamed to linux.iso by iso2usb and is present in the root directory of the USB. The kickstart file has the following entry:

        harddrive --partition=sda1 --dir=/

    Please help me automate the installation with kickstart.

    Edit 1: This is the anaconda.log file:

        09:01:57,029 INFO : no /etc/zfcp.conf; not configuring zfcp
        09:01:57,259 INFO : created new libuser.conf at /tmp/libuser.4rAbps with instPath="/mnt/sysimage"
        09:01:57,259 INFO : anaconda called with cmdline = ['/usr/bin/anaconda', '--stage2', 'hd:sda1:///images/install.img', '--dlabel', '--kickstart', '/tmp/ks.cfg', '--graphical', '--selinux', '--lang', 'en_US.UTF-8', '--keymap', 'us', '--repo', 'hd:sda1:/']
        09:01:57,260 INFO : Display mode = g
        09:01:57,260 INFO : Default encoding = utf-8
        09:01:59,444 DEBUG : X server has signalled a successful start.
        09:01:59,446 INFO : Starting window manager, pid 1345.
        09:01:59,537 INFO : Starting graphical installation.
        09:01:59,741 INFO : Detected 7968M of memory
        09:01:59,741 INFO : Swap attempt of 7968M
        09:02:00,840 INFO : ISCSID is /usr/sbin/iscsid
        09:02:00,840 INFO : no initiator set

    Edit 2: This is the part of the anaconda log that indicates it found the USB, etc.:

        09:01:47,918 INFO : starting STEP_STAGE2
        09:01:47,918 INFO : partition is sda1, dir is //images/install.img
        09:01:47,918 INFO : mounting device sda1 for hard drive install
        09:01:48,005 INFO : Path to stage2 image is /mnt/isodir///images/install.img
        09:01:54,214 INFO : mounted loopback device /mnt/runtime on /dev/loop0 as /tmp/install.img
        09:01:54,214 INFO : Looking for updates for HD in /mnt/isodir///images/updates.img
        09:01:54,214 INFO : Looking for product for HD in /mnt/isodir///images/product.img
        09:01:54,227 INFO : got stage2 at url hd:sda1:///images/install.img
        09:01:54,254 INFO : Loading SELinux policy
        09:01:54,700 INFO : getting ready to spawn shell now
        09:01:54,975 INFO : Running anaconda script /usr/bin/anaconda
        09:01:56,882 INFO : _Fedora is the highest priority installclass, using it
        09:01:56,921 INFO : Running kickstart %%pre script(s)
        09:01:56,922 WARNING : '/bin/sh' specified as full path
        09:01:56,926 INFO : All kickstart %%pre script(s) have been run

    Read the article

  • Viability of Apache (MPM Worker), FastCGI PHP 4/5.2/5.3, and MySQL 5

    - by Adrian
    My server will be hosting numerous PHP web applications ranging from Joomla, Drupal, and some legacy (read: PHP4) and other custom-built code inherited from clients. This will be a development machine used by a dozen or so web developers, and issues like fluctuating loads or particularly high load expectations are not important.

    Now, my question: are there any concerns I should know about when using Apache w/ MPM Worker, PHP 4/PHP 5.2/PHP 5.3 (all via FastCGI), and MySQL 5 (with a query cache of 64MB)? I have not tested the various applications extensively, and I have only recently learned how to install PHP and utilize it via FastCGI (rather than mod_php, which in this case seemed impossible, considering the multiple versions of PHP and the desire to use MPM Worker over MPM Prefork). I have come to understand that there could be concerns regarding XCache and APC, namely non-thread-safety issues where data becomes corrupted and the capability to use MPM Worker becomes null and void. Is this a valid concern?

    I have been using my personal testing server (running Ubuntu Server Edition 10.04 in VirtualBox) which has 2GB of RAM available to it. Here is the configuration used (the actual server will likely use a configuration more tailored to suit its purposes):

    Apache:

        Server version: Apache/2.2.14 (Ubuntu)
        Server built: Apr 13 2010 20:22:19
        Server's Module Magic Number: 20051115:23
        Server loaded: APR 1.3.8, APR-Util 1.3.9
        Compiled using: APR 1.3.8, APR-Util 1.3.9
        Architecture: 64-bit
        Server MPM: Worker
          threaded: yes (fixed thread count)
          forked: yes (variable process count)

    Worker:

        <IfModule mpm_worker_module>
            StartServers 2
            MinSpareThreads 25
            MaxSpareThreads 75
            ThreadLimit 64
            ThreadsPerChild 25
            MaxClients 400
            MaxRequestsPerChild 2000
        </IfModule>

    PHP ./configure (PHP 4.4.9, PHP 5.2.13, PHP 5.3.2):

        --enable-bcmath \
        --enable-calendar \
        --enable-exif \
        --enable-ftp \
        --enable-mbstring \
        --enable-pcntl \
        --enable-soap \
        --enable-sockets \
        --enable-sqlite-utf8 \
        --enable-wddx \
        --enable-zip \
        --enable-fastcgi \
        --with-zlib \
        --with-gettext \

    Apache php-fastcgi-setup.conf:

        FastCgiServer /var/www/cgi-bin/php-cgi-5.3.2
        FastCgiServer /var/www/cgi-bin/php-cgi-5.2.13
        FastCgiServer /var/www/cgi-bin/php-cgi-4.4.9
        ScriptAlias /cgi-bin-php/ /var/www/cgi-bin/

    Read the article

  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well.

    - by Thomas
    Ok, here's the scenario: Up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then, a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me, too. But, it only timed out in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser. To clarify: https://www.mysite.com/changes.php times out on POST, but the same with http works fine. That distinction (SSL vs. Non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the Request and Response headers from the page, and they all follow the correct formats. Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based. They never time out on static images or html files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus is going to hit every one of the machines to which I have access in exactly the same manner. My setup is: Server: Win2k3, II6, PHP 5.2.9-1. Clients: IE7, IE8, Chrome (regular and dev channel): Frequent timeouts on PHP pages. Firefox 2, Firefox 3: No timeouts. Firebug shows no errors or even lengthy periods serving the pages. I've spent 2 days searching for any tech knowledge that I can find, and my search parameters are all too general. Everyone has problems loading SSL pages in IE and Chrome for a wide variety of reasons. The infrequent nature of the timeouts and the fact that there are no errors being reported anywhere is starting to drive me insane. Does anyone have any insight on a problem like this?

    Read the article

  • Shadow copy referencing invalid volume from symboliclink

    - by ccook
    I recently replaced my motherboard after the last one failed (was shorting and causing random reboots). I'm sure this was not healthy for the machine, and that a clean install would do wonders, but I'd like to fix the current install. That aside, I've been tracking down a pair of errors in the application log. Volume Shadow Copy Service error: Error calling a routine on a Shadow Copy Provider {b5946137-7b9f-4925-af80-51abd60b20d5}. Routine details IVssSnapshotProvider::QueryVolumesSupportedForSnapshots(ProviderId,29,...) [hr = 0x80042302, A Volume Shadow Copy Service component encountered an unexpected error. Check the Application event log for more information. ]. Operation: Query volumes supported by this provider Context: Provider ID: {b5946137-7b9f-4925-af80-51abd60b20d5} Snapshot Context: 29 Followed by Volume Shadow Copy Service error: Unexpected error calling routine Error calling CreateFile on volume '\?\Volume{f4bda86e-049d-11e1-9255-bcaec56690a1}\'. hr = 0x80070020, The process cannot access the file because it is being used by another process. This error is reproducible at command line, creating the two event log entries C:\Windows\system32>vssadmin list volumes vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool (C) Copyright 2001-2005 Microsoft Corp. Error: The shadow copy provider had an unexpected error while trying to process the specified command. Using WinObj from Sysinternals, I have tracked down the global object. '\?\Volume{f4bda86e-049d-11e1-9255-bcaec56690a1}\' - SymbolicLink - '\Device\HarddiskVolume8' Running DISKPART, and running the command "list volume" within it lists volumes 0 through 6, there is not a HarddiskVolume8. How can I remove this reference to HarddiskVolume8, and get shadow copy up and running?

    Read the article

  • GeForce 8800GT not even giving basic output

    - by Sam
    My Dad bought a GeForce 8800GT graphics card quite a long time ago now. It has never worked in his PC. Print out from a dxdiag: System Information Time of this report: 4/13/2010, 19:52:40 Machine name: USER-PC Operating System: Windows Vista™ Home Premium (6.0, Build 6001) Service Pack 1 (6001.vistasp1_gdr.091208-0542) Language: English (Regional Setting: English) System Manufacturer: To Be Filled By O.E.M. System Model: To Be Filled By O.E.M. BIOS: Default System BIOS Processor: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz (4 CPUs), ~2.3GHz Memory: 2046MB RAM Page File: 1045MB used, 3296MB available Windows Dir: C:\Windows DirectX Version: DirectX 10 DX Setup Parameters: Not found DxDiag Version: 6.00.6001.18000 32bit Unicode DxDiag Notes Display Tab 1: No problems found. Sound Tab 1: No problems found. Sound Tab 2: No problems found. Sound Tab 3: No problems found. Input Tab: No problems found. DirectX Debug Levels Direct3D: 0/4 (retail) DirectDraw: 0/4 (retail) DirectInput: 0/5 (retail) DirectMusic: 0/5 (retail) DirectPlay: 0/9 (retail) DirectSound: 0/5 (retail) DirectShow: 0/6 (retail) Display Devices Card name: ATI Radeon HD 2400 PRO Manufacturer: ATI Technologies Inc. Chip type: ATI Radeon Graphics Processor (0x94C3) DAC type: Internal DAC(400MHz) Device Key: Enum\PCI\VEN_1002&DEV_94C3&SUBSYS_00000000&REV_00 Display Memory: 1012 MB Dedicated Memory: 245 MB Shared Memory: 767 MB Current Mode: 1280 x 960 (32 bit) (75Hz) Monitor: Generic PnP Monitor Driver Name: atiumdag.dll,atiumdva.dat,atitmmxx.dll Driver Version: 7.14.0010.0523 (English) DDI Version: 10 Driver Attributes: Final Retail Driver Date/Size: 8/22/2007 02:43:14, 3021312 bytes That info is from the current card that is installed in it and has been installed since its purchase roughly 3-4 years ago. When I physically install the card I put it into a purple slot on the motherboard that the old card was in (if I go into the device manager and select properties on the current card it confirms that the slot is a "PCI Slot 16 (PCI bus 2, device 0, function 0)") and boot up the computer but get absolutely no output. The screen that we have registers that it is connected to something (by not displaying the screen it does when the cable is unplugged) but just remains blank, no output at all. I recently took the card to my University and one of my friends who is better with hardware issues than I am tried it in his system and it worked perfectly. No issues whatsoever. I do not have a spec list for his system but I could get one if you need it. If you need any more information on this issue I will be happy to supply you with it as I am starting to get very annoyed with this problem ^_^

    Read the article

  • Firefox 29 - how do I delete history entries visited fewer than x times

    - by lousyuser
    Context: I've been using my Firefox profile for a couple of years now. My history file has become huge, naturally. I got Firefox Sync set up between my main desktop PC and my laptop. HW configs: PC: i5-3450, 8 GB DDR3 RAM, Crucial M4 128 GB SSD laptop: Pentium SU4100, 4 GB DDR3 RAM, WD 5400 rpm HDD Accessing history entries when typing into the Awesome Bar on my desktop takes quite a long time despite the decent config, the laptop is even slower. The experience is quite unresponsive. I figured if I cleared the history up a little bit, I might avoid creating a new profile to speed things up. The question itself: to illustrate: Is there a way to delete all history entries that have been visited fewer than x (let's say 5) times and at the same time the recent visit is fewer than y (let's say 120) days old? afaik the history file is some kind of SQL database, but I'm not really sure how the data is saved, if there's a "safe way" to edit it and what the query to do what I need would look like. Thanks in advance for any help. I kept browsing through previous SuperUser questions to see if I could find relevant information. "In my Firefox profile directory, there is a filed named places.sqlite. Opening it with sqlite reveals (amongst others) the tables moz_places and moz_historyvisits. It seems that moz_historyvisits uses the primary of moz_places to refer to the URLs." As I'm unfamiliar with databases, I don't really understand the way the two tables mentioned in the quote are related. screenshot of a part of the tables I've noticed the visit_count is in a standard format, making it easy to work with. The last_visit_date looks encrypted to my naked eye, but I can't see in which way. Hope that helps, I'm at my wits' end.
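
    For what it's worth, the two tables are linked by moz_historyvisits.place_id pointing at moz_places.id, and last_visit_date in moz_places appears to be microseconds since the Unix epoch rather than anything encrypted. A rough sketch of the kind of query you could run against a backed-up copy of places.sqlite, with Firefox closed, is below; the direction of the 120-day comparison follows the wording above ("fewer than 120 days old"), so flip the > to < if you actually mean entries older than 120 days:

        DELETE FROM moz_places
        WHERE visit_count < 5
          AND last_visit_date > (strftime('%s','now') - 120 * 86400) * 1000000;

        DELETE FROM moz_historyvisits
        WHERE place_id NOT IN (SELECT id FROM moz_places);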

    Read the article

  • Chrome caching 302 redirects

    - by Thermionix
    I have a PHP script which is used to rotate banner images on a site. Under Firefox/IE, page refreshes will make another request and a different image will be returned. Under Chrome, the request seems to be cached, and only opening the page in a new tab will cause it to actually query the script. I believe this used to work in older versions of Chrome. I've tried a few different types of redirect codes, all with the same result. Any tips?

        <img class="banner" src="/inc/banner.php" alt="">

        ~$ cat /var/www/inc/banner.php
        <?php
        header("HTTP/1.1 302 Redirect");
        header("Cache-Control: max-age=0, no-cache, no-store, must-revalidate");
        //header('HTTP/1.1 307 Temporary Redirect');
        //header("expires: none");
        //header("expires: max");
        //header("Cache-Control: public");
        $folder = '../img/banner/';
        $exts = 'jpg jpeg png gif';
        $files = array();
        $i = -1;
        if ('' == $folder) $folder = './';
        $handle = opendir($folder);
        $exts = explode(' ', $exts);
        while (false !== ($file = readdir($handle))) {
            foreach($exts as $ext) { // for each extension check the extension
                if (preg_match('/\.'.$ext.'$/i', $file, $test)) { // faster than ereg, case insensitive
                    $files[] = $file; // it's good
                    ++$i;
                }
            }
        }
        closedir($handle); // We're not using it anymore
        mt_srand((double)microtime()*1000000); // seed for PHP < 4.2
        $rand = mt_rand(0, $i); // $i was incremented as we went along
        header('Location: '.$folder.$files[$rand]);
        flush();
        ?>

    curl output:

        ~$ curl -I -k https://example.net/inc/banner.php
        HTTP/1.1 302 Redirect
        Server: nginx/1.1.14
        Date: Fri, 24 Feb 2012 03:23:46 GMT
        Content-Type: text/html
        Connection: keep-alive
        X-Powered-By: PHP/5.3.10-1ubuntu1
        Cache-Control: max-age=0, no-cache, no-store, must-revalidate
        Location: ../img/banner/2.jpg

    Read the article

  • How do I keep a table in sync across multiple SQL Databases?

    - by Refracted Paladin
    I have a WinForms data entry application that uses 4 separate databases. This is an occasionally connected app that uses merge replication (SQL 2005) to stay in sync. This is working just fine. The next hurdle I am trying to tackle is adding filters to my publications. Right now we are replicating 70MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using filters I am able to accomplish this (see code below), but I had to make a mapping table in order to do so. This mapping table consists of 3 columns: a PrimaryID (guid), WorkerName (varchar), and ClientID (int). The problem is I need this table present in all FOUR databases in order to use it for the filter since, to my knowledge, views or cross-database queries are not allowed in a filter statement.

    What are my options? It seems like I would set it up to be maintained in one database and then use triggers to keep it updated in the other 3 databases. In order for it to be part of the filter I have to include that table in the replication set, so how do I flag it appropriately? Is there a better way altogether?

        SELECT <published_columns> FROM [dbo].[tblPlan] WHERE [ClientID] IN
            (select ClientID from [dbo].[tblWorkerOwnership] where WorkerID = SUSER_SNAME())

    Filters can also be chained together; this next one sits below the first one, so it only pulls from the first's filtered set:

        SELECT <published_columns> FROM [dbo].[tblPlan] INNER JOIN [dbo].[tblHealthAssessmentReview]
            ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]

    P.S. - I know how illogical the DB structure sounds. I didn't make it. I inherited it and was then told to make it a "disconnected app." Go figure!
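
    If the mapping table really does have to exist physically in all four databases, one commonly used pattern is to maintain it in a single "owner" database and push changes to the copies with a trigger. The sketch below is only an illustration of that idea; it assumes all four databases sit on the same instance, uses the three columns described above, and the database and trigger names are hypothetical:

        -- in the owning database, e.g. DB1
        CREATE TRIGGER trg_tblWorkerOwnership_Sync
        ON dbo.tblWorkerOwnership
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- remove rows that were deleted or changed
            DELETE t
            FROM DB2.dbo.tblWorkerOwnership AS t
            INNER JOIN deleted AS d ON d.PrimaryID = t.PrimaryID;
            -- re-insert the current versions of new or changed rows
            INSERT INTO DB2.dbo.tblWorkerOwnership (PrimaryID, WorkerName, ClientID)
            SELECT PrimaryID, WorkerName, ClientID
            FROM inserted;
            -- repeat the two statements for DB3 and DB4
        END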

    Read the article

  • Google responds differently to two identical nginx setups and 200 codes; any ideas?

    - by Yuji Tomita
    I'm rather confused... I have a linode.com VPS which has been cloned recently, so the settings are the same between nginx servers. One lives on a dev subdomain, one on a www. I'm trying to run a google experiment on my live server, which claims: Web server rejects utm_expid. Your server doesn't support added query arguments in URLs. My logs show on the dev server where it works: 74.125.186.32 - - [13/Sep/2012:13:33:45 -0700] "GET /product/iphone-case/?utm_expid=25706866-0 HTTP/1.1" 200 12521 "-" "Google_Analytics_Content_Experiments 74.125.186.32 - - [13/Sep/2012:13:33:45 -0700] "GET /product/iphone-case/?ab_reviews=True&utm_expid=25706866-0 HTTP/1.1" 200 14679 "-" "Google_Analytics_Content_Experiments My production server shows google making a second request. 74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/?ab_reviews=on&utm_expid=25706866-1 HTTP/1.1" 200 12104 "-" "Google_Analytics_Content_Experiments 74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/?utm_expid=25706866-1 HTTP/1.1" 200 12122 "-" "Google_Analytics_Content_Experiments 74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/ <--- A second request for some reason. HTTP/1.1" 200 12522 "-" "Google_Analytics_Content_Experiments I'm not sure how google determines why it needs to send a second request without the querystring. The original request has clearly sent a 200 OK status response. Does anybody have any suggestions where to look next? The HTML (compared by diff) on the two pages is exactly the same.

    Read the article

  • svnsync loses revision properties although hook installed

    - by roesslerj
    Hello all! I have a pretty weird problem. We have setup an SVN-Mirror via cronjob (because it needs to go from inside to outside of a firewall, so no post-commit-hook possible) and svnsync. We installed a pre-revprop-hook just as told. Everything seems to work fine, except that it doesn't. E.g. when manually executing the script. # svnsync --non-interactive sync file://<path-to-mirror> --source-username <usr> --source-password <pwd> Committed revision 19817. Copied properties for revision 19817. No error, no complaints. But if checking for the revision properties it says: # svnlook info <path-to-mirror> 0 # svn info -r HEAD file://<path-to-mirror> 2>&1 Path: <root-of-mirror> URL: file://<path-to-mirror> Repository Root: file://<path-to-mirror> Repository UUID: <uid> Revision: 19817 Node Kind: directory Last Changed Rev: 19817 So somehow the author and timestamp information gets lost. But we need that information for our internal processes. Since no error or warning is produced I have absolutely no idea even where to start to look. Everything is local (except for the remote master), so there are no server-logs to look at. I also tried to manually recopy via svnsync copy-revprops (http://chestofbooks.com/computers/revision-control/subversion-svn/svnsync-Copy-revprops-Ref-svnsync-C-Copy-revprops.html). It says Copied properties for revision 19885. But when I query them, it's just the same. Any ideas how I could approach that problem, or even better -- how to solve it? Any ideas appreciated.

    Read the article

  • CPU / Affinity mask problem in SQL 2005

    - by Robert Moir
    Hi folks, I'm having a problem with a SQL Server which was virtualised. The CPU mask was set on the physical host for some reason, and now advanced options are not available. So I need to reconfigure the CPU affinity mask settings - which are advanced options, so this is blocked because of the affinity mask issue. I've tried doing this from the SQL server in single-user command-line mode; I've googled and found lots of people with similar problems but no real solution. I'm stumped. Any ideas? Sample commands and output from Query Analyzer below.

        sp_configure 'show advanced options', 1
        GO
        RECONFIGURE WITH OVERRIDE
        GO
        sp_configure 'affinity mask', 0x00000000
        GO
        RECONFIGURE
        GO
        -----------------------------------------
        Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
        Msg 5832, Level 16, State 1, Line 1
        The affinity mask specified does not match the CPU mask on this system.
        Msg 15123, Level 16, State 1, Procedure sp_configure, Line 51
        The configuration option 'affinity mask' does not exist, or it may be an advanced option.
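
    If memory serves, starting the instance with the -f flag (minimal configuration) sidesteps the stored affinity mask, which can break the circular "advanced option blocked by the bad advanced option" situation. A rough sketch of the sequence, run from a single connection against an instance started that way - the 'affinity64 mask' line is an assumption and only applies to 64-bit instances:

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE WITH OVERRIDE;
        EXEC sp_configure 'affinity mask', 0;
        RECONFIGURE WITH OVERRIDE;
        EXEC sp_configure 'affinity64 mask', 0;
        RECONFIGURE WITH OVERRIDE;
        -- restart the service normally afterwards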

    Read the article

  • Using Windows Explorer, how to find file names starting with a dot (period), in 7 or Vista?

    - by Chris W. Rea
    I've got a MacBook laptop in the house, and when Mac OS X copies files over the network, it often brings along hidden "dot-files" with it. For instance, if I copy "SomeUtility.zip", there will also be copied a hidden ".SomeUtility.zip" file. I consider these OS X dot-files as useless turds of data as far as the rest of my network is concerned, and don't want to leave them on my Windows file server. Let's assume these dot-files will continue to happen. i.e. Think of the issue of getting OS X to stop creating those files, in the first place, to be another question altogether. Rather: How can I use Windows Explorer to find files that begin with a dot / period? I'd like to periodically search my file server and blow them away. I tried searching for files matching ".*" but that yielded – and not unexpectedly – all files and folders. Is there a way to enter more specific search criteria when searching in Windows Explorer? I'm referring to the search box that appears in the upper-right corner of an Explorer window. Please tell me there is a way to escape my query to do what I want? (Failing that, I know I can map a drive letter and drop into a cygwin prompt and use the UNIX 'find' command, but I'd prefer a shiny easy way.)

    Read the article

  • MySQL Master-Master w/ multiple read slave cost effective setup in AWS

    - by Ross
    I've been evaluating Amazon Web Services RDS for MySQL and costing out potential scenarios involving a simple multi-AZ deployment read/write setup vs. a multi-AZ deployment mysql master (hot-standby) with additional read-only slaves. the issue I'm trying to cost-optimize includes their reserved instance vs on-demand instances. Situation 1: purchase reserved multi-az setup for Extra-large-hi-mem(17GB RAM) instance for $5200/yr and have my application query the master all the time. the problem is, if I don't need all the resources of the (17GB RAM) all the time and therefore, especially not a hot-standby, what alternatives for savings can a better topology create, like potentially situation 2 below: Situation 2: purchase reserved multi-az setup using smaller master instances than above for the master-master hot-standby to receive the writes only. Then create and load balance several read-only slaves off the master and add/remove and/or scale up/down the read slaves based on demand. This might only cost $1000 + the on-demand usage of the read slaves. My thinking is, if I have a variable read-intensive application load, with low write load, the single level topology in situation 1 means I'm paying for a lot of resources at the write level of topology when I don't need them there. My hope is that situation 2 can yield cost savings from smaller reserved instances on the master-master resource level allowing me to scale up and down and/or out on the read-level according to demand as needed. Does anyone see a downside to doing this or know of some reason this isn't possible with RDS? Any other thoughts or advice always welcome of course. Thanks in advance, R

    Read the article
