Search Results

Search found 10177 results on 408 pages for 'thumbs db'.

  • Multiple Zend application code organisation

    - by user966936
    For the past year I have been working on a series of applications, all based on the Zend Framework and centered on a complex business logic that all applications must have access to even if they don't use all of it (easier than having multiple library folders for each application, as they are all linked together with a common center). Without going into much detail about what the project is specifically about, I am looking for some input (as I am working on the project alone) on how I have "grouped" my code. I have tried to split it all up in such a way that it removes dependencies as much as possible. I'm trying to keep it as decoupled as I logically can, so in 12 months' time when my time is up, anyone else coming in can have no problem extending on what I have produced.
    Example structure:
        applicationStorage\ (contains all applications and associated data)
        applicationStorage\Applications\ (contains the applications themselves)
        applicationStorage\Applications\external\ (application grouping folder; contains all external customer access applications)
        applicationStorage\Applications\external\site\ (main external customer access application)
        applicationStorage\Applications\external\site\Modules\
        applicationStorage\Applications\external\site\Config\
        applicationStorage\Applications\external\site\Layouts\
        applicationStorage\Applications\external\site\ZendExtended\ (contains extended Zend classes specific to this application, for example ZendExtended_Controller_Action extends Zend_Controller_Action)
        applicationStorage\Applications\external\mobile\ (mobile external customer access application; different workflow, limited capabilities compared to the full site version)
        applicationStorage\Applications\internal\ (application grouping folder; contains all internal company applications)
        applicationStorage\Applications\internal\site\ (main internal application)
        applicationStorage\Applications\internal\mobile\ (mobile access; different flow and limited abilities compared to the main site version)
        applicationStorage\Tests\ (contains PHPUnit tests)
        applicationStorage\Library\
        applicationStorage\Library\Service\ (contains all business logic, services and the service locator; these are completely decoupled from the Zend Framework and rely on the models' interfaces)
        applicationStorage\Library\Zend\ (the Zend Framework)
        applicationStorage\Library\Models\ (doesn't know about services, but is linked to the Zend Framework for DB operations; contains model interfaces and model datamappers for all business objects; examples include Iorder/IorderMapper, Iworksheet/IWorksheetMapper, Icustomer/IcustomerMapper)
    (Note: the Modules, Config, Layouts and ZendExtended folders are duplicated in each application folder; I have omitted them as they are not required for my purposes.)
    The Library contains all "universal" code. The Zend Framework is at the heart of all applications, but I wanted my business logic to be Zend-Framework-independent. All model and mapper interfaces have no public references to Zend_Db but actually wrap around it in private. So my hope is that in the future I will be able to rewrite the mappers and DbTables (containing a Models_DbTable_Abstract that extends Zend_Db_Table_Abstract) in order to decouple my business logic from the Zend Framework, if I want to move my business logic (services) to a non-Zend-Framework environment (maybe some other PHP framework).
    Using a service locator and registering the required services within the bootstrap of each application, I can use different versions of the same service depending on the request and which application is being accessed. For example, all external applications will have a Service_Auth_External implementing Service_Auth_Interface registered; likewise, internal applications will have a Service_Auth_Internal implementing the same interface, retrieved via Service_Locator::getService('Auth'). I'm concerned I may be missing some possible problems with this. One I'm half-thinking about is a config.ini file for all externals, then a separate application config.ini overriding or adding to the global external config.ini. If anyone has any suggestions I would be greatly appreciative. I have used context switching for AJAX functions within the individual applications, but there is a big chance both external and internal will get web services created for them. Again, these will be separated due to authorisation and different available services:
        \applicationstorage\Applications\internal\webservice
        \applicationstorage\Applications\external\webservice
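    On the layered-configuration point, a minimal sketch of how the override could look (the paths and keys here are illustrative, not from the original post). A global external config.ini holds shared settings and each application's own config.ini re-declares only what differs; Zend_Config_Ini can also express this inside one file via section inheritance ([site : production] makes "site" start from "production" and override selectively):

        ; applicationStorage\Applications\external\config.ini (global, hypothetical)
        [production]
        auth.adapter = "Service_Auth_External"
        db.params.host = "db.example.local"

        ; applicationStorage\Applications\external\site\Config\config.ini (per-app override, hypothetical)
        [production]
        db.params.host = "db-site.example.local"

    Loading the global file first and then merging the per-application file over it (Zend_Config's merge() does exactly this) keeps one source of truth for the common values.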

    Read the article

  • cannot install firmware-b43-installer

    - by unknown
    This is the output I get when installing:
        installArchives() failed: Preconfiguring packages ...
        Selecting previously deselected package menu.
        (Reading database ... 216125 files and directories currently installed.)
        Unpacking menu (from .../menu_2.1.44ubuntu1_i386.deb) ...
        Selecting previously deselected package wifi-radar.
        Unpacking wifi-radar (from .../wifi-radar_2.0.s05-1.2_all.deb) ...
        Processing triggers for man-db ...
        Processing triggers for install-info ...
        Processing triggers for doc-base ...
        Processing 1 added doc-base file(s)...
        Registering documents with scrollkeeper...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for desktop-file-utils ...
        Processing triggers for python-gmenu ...
        Rebuilding /usr/share/applications/desktop.en_US.UTF8.cache...
        Processing triggers for python-support ...
        Setting up firmware-b43-installer (4.150.10.5-5) ...
        --2012-10-26 08:51:30-- http://mirror2.openwrt.org/sources/broadcom-wl-4.150.10.5.tar.bz2
        Resolving mirror2.openwrt.org... 46.4.11.11
        Connecting to mirror2.openwrt.org|46.4.11.11|:80... failed: Connection refused.
        dpkg: error processing firmware-b43-installer (--configure):
        subprocess installed post-installation script returned error exit status 4
        No apport report written because MaxReports is reached already
        Setting up menu (2.1.44ubuntu1) ...
        Processing triggers for menu ...
        Setting up wifi-radar (2.0.s05-1.2) ...
        Processing triggers for menu ...
        Errors were encountered while processing: firmware-b43-installer
        Setting up firmware-b43-installer (4.150.10.5-5) ...
        --2012-10-26 08:51:33-- http://mirror2.openwrt.org/sources/broadcom-wl-4.150.10.5.tar.bz2
        Resolving mirror2.openwrt.org... 46.4.11.11
        Connecting to mirror2.openwrt.org|46.4.11.11|:80... failed: Connection refused.
        dpkg: error processing firmware-b43-installer (--configure):
        subprocess installed post-installation script returned error exit status 4
    The same thing occurs when I try to install any of the wireless applications. All other software installs fine; the same error appears only when trying to install firmware. I tried going to the link (http://mirror2.openwrt.org/sources/broadcom-wl-4.150.10.5.tar.bz2) and downloading the package myself, but found no makefile in the package. Please help me.
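    Since the post-install script only fails because mirror2.openwrt.org refuses connections, a commonly suggested workaround is to fetch the tarball from a reachable mirror and cut the firmware out by hand with b43-fwcutter; there is no makefile because nothing needs compiling. A hedged sketch, where the alternate mirror URL and the path of wl_apsta_mimo.o inside the 4.150.10.5 tarball are assumptions worth verifying:
        $ wget http://downloads.openwrt.org/sources/broadcom-wl-4.150.10.5.tar.bz2   # alternate mirror, assumed reachable
        $ tar xjf broadcom-wl-4.150.10.5.tar.bz2
        $ sudo apt-get install b43-fwcutter
        $ sudo b43-fwcutter -w /lib/firmware broadcom-wl-4.150.10.5/driver/wl_apsta_mimo.o   # extracts the firmware files into /lib/firmware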

    Read the article

  • New language features in JDK7

    - by Todd Bao
    Based on JSR 334 (Project Coin), JDK 7 introduces several small language changes:
    1. switch statements accept String values:
        public static void switchString(String s){
            switch (s){
            case "db": ...
            case "wls": ...
            case "idm": ...
            case "soa": ...
            case "fa": ...
            default: ...
            }
        }
    2. New numeric literal notation: a "0b" prefix and "_" digit separators make literals easier to read.
       a. The 0b prefix expresses a value directly in binary:
        byte b1 = 0b00100001;    // New
        byte b2 = 0x21;          // Old
        byte b3 = 33;            // Old
       b. Underscores can group the digits of long numbers, such as phone numbers:
        long phone_nbr = 021_1111_2222;
    3. The diamond operator: the type arguments of a generic "new" can be inferred from the declaration:
        ArrayList<String> al1 = new ArrayList<String>();    // Old
        ArrayList<String> al2 = new ArrayList<>();          // New
    4. A new common superclass for the reflection-related exceptions, ReflectiveOperationException, covering:
        ClassNotFoundException, IllegalAccessException, InstantiationException, InvocationTargetException, NoSuchFieldException, NoSuchMethodException
    5. A single catch clause can handle several exception types, separated by "|":
        try{
            // code
        }
        catch (SQLException | IOException ex) {
            // ...
        }
    6. More precise rethrow: a caught exception may be rethrown while the method still declares the original, more specific exception types:
        public void test() throws NoSuchMethodException, NoSuchFieldException{
            try{
                // code
            }
            catch (ReflectiveOperationException ex){
                throw ex;
            }
        }
    7. try-with-resources: resources declared between "(" and ")" after the try keyword (any type implementing java.lang.AutoCloseable) are closed automatically:
        try(BufferedReader br = new BufferedReader(new FileReader("/home/oracle/temp.txt"))){
            ... br.readLine() ...
        }
    A try-with-resources statement can also have catch clauses, which handle exceptions thrown from the block and from the implicit close. Todd

    Read the article

  • Xenserver 5.6 SR_BACKEND_FAILURE_47 no such volume group, but it is there

    - by Juan Carlos
    I've looked everywhere (Google, here, a bunch of other sites), and while I have found people with similar problems, I couldn't find a single one with a solution to this. Last night our XenServer 5.6 box corrupted /var/xapi/state.db, and I couldn't fix the XML no matter what I did. After a good hour fiddling with the file, I figured it would be faster to just reinstall. The server had one 2 TB hard drive running Xen and its VMs, and since Xen's installer said it would erase the hard drive it was installed on, I plugged in a new hard drive and installed Xen on it, without selecting any hard drives for storage. I figured I could set it up after install, using the partition on the old hard drive with all my VMs on it. After the installation finished and the system booted, I did:
        # fdisk -l
        (found the old partition at /dev/sda3)
        # ll /dev/disk/by-id
        (found the partition at /dev/disk/by-id/scsi-3600188b04c02f100181ab3a48417e490-part3)
        # xe host-list
        uuid ( RO)                : a019d93e-4d84-4a4b-91e3-23572b5bd8a4
        name-label ( RW)          : xenserver-scribfourteen
        name-description ( RW)    : Default install of XenServer
        # pvscan
        PV /dev/sda3   VG VG_XenStorage-405a2ece-d10e-d6c5-ede2-e1ad2c29c68d   lvm2 [1.81 TB / 204.85 GB free]
        Total: 1 [1.81 TB] / in use: 1 [1.81 TB] / in no VG: 0 [0 ]
        # vgscan
        Reading all physical volumes. This may take a while...
        Found volume group "VG_XenStorage-405a2ece-d10e-d6c5-ede2-e1ad2c29c68d" using metadata type lvm2
        # pvdisplay
        --- Physical volume ---
        PV Name               /dev/sda3
        VG Name               VG_XenStorage-405a2ece-d10e-d6c5-ede2-e1ad2c29c68d
        PV Size               1.81 TB / not usable 6.97 MB
        Allocatable           yes
        PE Size (KByte)       4096
        Total PE              474747
        Free PE               52441
        Allocated PE          422306
        PV UUID               U03Gt9-WtHi-8Nnu-QB2Q-c7BV-CO9A-cFpYWW
        # xe sr-introduce name-label="VMs" type=lvm uuid=U03Gt9-WtHi-8Nnu-QB2Q-c7BV-CO9A-cFpYWW name-description="VMs Local HD Storage" content-type=user shared=false device-config=:device=/dev/disk/by-id/scsi-3600188b04c02f100181ab3a483f9f0ae-part3
        U03Gt9-WtHi-8Nnu-QB2Q-c7BV-CO9A-cFpYWW
        # xe pbd-create host-uuid=a019d93e-4d84-4a4b-91e3-23572b5bd8a4 sr-uuid=U03Gt9-WtHi-8Nnu-QB2Q-c7BV-CO9A-cFpYWW device-config:device=/dev/disk/by-id/scsi-3600188b04c02f100181ab3a483f9f0ae-part3
        adf92b7f-ad40-828f-0728-caf94d2a0ba1
        # xe pbd-plug uuid=adf92b7f-ad40-828f-0728-caf94d2a0ba1
        Error code: SR_BACKEND_FAILURE_47
        Error parameters: , The SR is not available [opterr=no such volume group: VG_XenStorage-U03Gt9-WtHi-8Nnu-QB2Q-c7BV-CO9A-cFpYWW]
    At this point I did
        # vgrename VG_XenStorage-405a2ece-d10e-d6c5-ede2-e1ad2c29c68d VG_XenStorage-U03Gt9-WtHi-8Nnu-QB2Q-c7BV-CO9A-cFpYWW
    because the VG name was different, but pbd-plug still gives me the same error. So now I'm kind of lost about what to do; I'm not used to Xen, and most sites I've found are really unhelpful. I hope someone can guide me in the right way to fix this. I can't lose those VMs (I've got backups, but from inside the guests, not of the VMs themselves).
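    For what it's worth, in the transcript above the uuid handed to sr-introduce is the LVM PV UUID, while the original volume-group name suggests the SR uuid XenServer expects is the suffix of that VG name (405a2ece-d10e-d6c5-ede2-e1ad2c29c68d). A hedged sketch of the re-attach sequence under that assumption (rename the VG back first; the device path and the placeholder in angle brackets are illustrative):
        # vgrename VG_XenStorage-U03Gt9-WtHi-8Nnu-QB2Q-c7BV-CO9A-cFpYWW VG_XenStorage-405a2ece-d10e-d6c5-ede2-e1ad2c29c68d
        # xe sr-introduce uuid=405a2ece-d10e-d6c5-ede2-e1ad2c29c68d name-label="VMs" type=lvm content-type=user shared=false
        # xe pbd-create host-uuid=a019d93e-4d84-4a4b-91e3-23572b5bd8a4 sr-uuid=405a2ece-d10e-d6c5-ede2-e1ad2c29c68d device-config:device=/dev/disk/by-id/scsi-3600188b04c02f100181ab3a48417e490-part3
        # xe pbd-plug uuid=<pbd-uuid returned by pbd-create>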

    Read the article

  • Cacti: "An internal Net-Snmp error condition detected in Cacti snmp_count"

    - by Recc
    There's the odd forum topic about errors similarly obscure to this one, but I haven't seen any for snmp_count in particular. Also, I don't see graphing problems, though I can't simply go and eyeball all graphs. However, the poller does time out and has to be stopped by its internal overrun protection. If I filter out the flood of this error in the log, I don't get anything else except the poller timeout:
        06/12/2014 12:48:00 PM - POLLER: Poller[0] Maximum runtime of 58 seconds exceeded. Exiting.
        06/12/2014 12:48:00 PM - SYSTEM STATS: Time:58.8566 Method:spine Processes:1 Threads:40 Hosts:1923 HostsPerProcess:1923 DataSources:61584 RRDsProcessed:0
        06/12/2014 12:48:00 PM - SPINE: Poller[0] ERROR: Spine Timed Out While Processing Hosts Internal
    In the running processes I saw "/usr/local/spine/spine 0 2053" always left behind. When I kill it, the flooding of the error stops; of course it's the same on the next poll run as it goes through the devices. 2053 is apparently the DB ID of a device. I deleted that device completely to see if that stops it. It doesn't; instead 2052 appears there. I suspect it would be the same if I kept deleting devices, which I will not do. This started happening midday, when I wasn't doing anything to the Cacti server. I have tried reducing Maximum Threads per Process to 1 and Number of PHP Script Servers to 1; I've been running it at 10 script servers / 40 threads for months with a poll cycle time of about 20 seconds. I just found out that running snmpwalk on any host begins returning values but then times out halfway through. This doesn't happen from other servers on the network, which suggests the problem is local to this Cacti box. Any suggestions? For one polling cycle I changed to cmd.php instead; then I started getting errors like:
        CMDPHP: Poller[0] Host[45] DS[541] WARNING: Result from SNMP not valid. Partial Result: U
    Perhaps as expected. Looking closely, I see that every snmpwalk I do is interrupted at the same place, as if some byte limit is hit and the connection torn down.
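    To check whether the mid-walk cutoff is a timeout or a message-size issue, it may help to rule each out separately with explicit flags. A hedged sketch with placeholder host and community, comparing a plain walk (one GETNEXT per variable) against a bulk walk with a small max-repetitions value:
        $ snmpwalk -v2c -c <community> -t 10 -r 2 <host> IF-MIB::ifDescr      # generous timeout and retries
        $ snmpbulkwalk -v2c -c <community> -Cr5 <host> IF-MIB::ifDescr       # small GETBULK batches
    If the plain walk completes while the bulk walk dies at the same offset, the likely culprit is oversized GETBULK responses being dropped somewhere, not the agent itself.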

    Read the article

  • BIOS upgrade lowers CPU temperature

    - by N.N.
    Setup
    I've got a system with an Asus P8Z68-V PRO motherboard and an Intel Core i7-2600K CPU running at stock speed (no overclocking), which I cool with a Noctua NH-U12P. On the heatsink I've got the two included fans connected via the included Low-Noise Adapters (L.N.A.), 1100 RPM, 16.9 dB(A). In the BIOS settings I've set the CPU and chassis fan profile to silent.
    Issue
    Yesterday I upgraded from BIOS version 0501 to 0606. After the upgrade I checked the temperatures in the BIOS monitor and was surprised to see that the CPU temperature was only ~30°C. Before the upgrade the CPU temperature was ~50°C with the same BIOS settings (see the following heading for details on temperatures). How can this be? It seems a bit odd that a BIOS upgrade can lower the CPU temperature by 20°C, and it also seems odd that the CPU temperature is lower than the chassis temperature.
    Temperatures
    When I've checked temperatures, the room temperature has been ~23°C. I haven't changed the placement of the computer, nor the hardware or cooling setup, between BIOS versions.
    BIOS version 0501 (BIOS monitor):
        CPU: ~50°C
        Chassis: ~33°C
    I haven't got any temperature measurements from lm-sensors or the like for version 0501, because I only discovered the issue after upgrading to version 0606, and the BIOS updater utility won't let me downgrade to version 0501 (it says "outdated image" when I try to load version 0501).
    BIOS version 0606 (BIOS monitor):
        CPU: ~30°C
        Chassis: ~33°C
    lm-sensors in Ubuntu 11.04 Desktop 64-bit (sudo sensors after an uptime of 4 h 52 min and a load average of 0.22, 0.18, 0.15):
        coretemp-isa-0000
        Adapter: ISA adapter
        Core 0:      +32.0°C  (high = +80.0°C, crit = +98.0°C)
        coretemp-isa-0001
        Adapter: ISA adapter
        Core 1:      +35.0°C  (high = +80.0°C, crit = +98.0°C)
        coretemp-isa-0002
        Adapter: ISA adapter
        Core 2:      +29.0°C  (high = +80.0°C, crit = +98.0°C)
        coretemp-isa-0003
        Adapter: ISA adapter
        Core 3:      +36.0°C  (high = +80.0°C, crit = +98.0°C)
    The BIOS monitor temperatures were checked directly after the lm-sensors readings.
    BIOS versions 0706, 0801, 1101 and 3203
    I get the same kind of temperatures, both in the BIOS monitor and with lm-sensors, in BIOS versions 0706, 0801, 1101 and 3203 as in 0606.
    Information from Asus
    The 0606 changelog mentions nothing explicitly about CPU temperature (though item 3, as indicated by sidran32, might affect temperatures):
        P8Z68-V PRO 0606 BIOS with IRST 10.6.0.1002
        1. Enable the support of Intel Rapid Storage Technology version 10.6.0.1002 Release
        2. Improve DRAM compatibility
        3. Improve System stability
        4. Improve compatibility with some Raid card model
        5. Increase IGD share memory size to 512MB
    However, the following FAQ might give a hint:
        "I find that the CPU temperature reading in BIOS is about 10~20 degrees centigrade hotter than the reading in OS. Is it normal?
        Solution: That is normal, as BIOS does not send the idle command to the CPU, making most of the power saving features useless. You should be getting a similar reading if you disable EIST/C1E/CPU C3 Report/CPU C6 Report in BIOS."

    Read the article

  • Can't install Perl DBD module on Ubuntu (for Bugzilla)

    - by Mara
    Trying to install Bugzilla 4.2.2 on Ubuntu 12.04. When I run checksetup.pl I get the following error:
        YOU MUST RUN ONE OF THE FOLLOWING COMMANDS (depending on which database you use):
        PostgreSQL: /usr/bin/perl install-module.pl DBD::Pg
        MySQL:      /usr/bin/perl install-module.pl DBD::mysql
        SQLite:     /usr/bin/perl install-module.pl DBD::SQLite
        Oracle:     /usr/bin/perl install-module.pl DBD::Oracle
        To attempt an automatic install of every required and optional module with one command, do:
        /usr/bin/perl install-module.pl --all
    I have installed MySQL via XAMPP, so I run:
        /usr/bin/perl install-module.pl DBD::mysql
    and get the following error:
        perl Makefile.PL --testuser=username
        Can't exec "mysql_config": No such file or directory at Makefile.PL line 479.
        Can't find mysql_config. Use --mysql_config option to specify where mysql_config is located
        (the two lines above repeat three times)
        Failed to determine directory of mysql.h. Use perl Makefile.PL --cflags=-I<dir> to set this directory. For details see the INSTALL.html file, section "C Compiler flags" or type perl Makefile.PL --help
        Warning: No success on command[/usr/bin/perl Makefile.PL LIB="/opt/lampp/htdocs/bugzilla/4.2.2/bugzilla-4.2.2/lib" INSTALLMAN1DIR="/opt/lampp/htdocs/bugzilla/4.2.2/bugzilla-4.2.2/lib/man/man1" INSTALLMAN3DIR="/opt/lampp/htdocs/bugzilla/4.2.2/bugzilla-4.2.2/lib/man/man3" INSTALLBIN="/opt/lampp/htdocs/bugzilla/4.2.2/bugzilla-4.2.2/lib/bin" INSTALLSCRIPT="/opt/lampp/htdocs/bugzilla/4.2.2/bugzilla-4.2.2/lib/bin" INSTALLDIRS=perl] CAPTTOFU/DBD-mysql-4.021.tar.gz
        /usr/bin/perl Makefile.PL LIB="/opt/lampp/htdocs/bugzilla/4.2.2/bugzilla-4.2.2/lib" INSTALLMAN1DIR="/opt/lampp/htdocs/bugzilla/4.2.2/bugzilla-4.2.2/lib/man/man1" INSTALLMAN3DIR="/opt/lampp/htdocs/bugzilla/4.2.2/bugzilla-4.2.2/lib/man/man3" INSTALLBIN="/opt/lampp/htdocs/bugzilla/4.2.2/bugzilla-4.2.2/lib/bin" INSTALLSCRIPT="/opt/lampp/htdocs/bugzilla/4.2.2/bugzilla-4.2.2/lib/bin" INSTALLDIRS=perl -- NOT OK
        Skipping test because of notest pragma
        Running make install
        Make had some problems, won't install
        Could not read metadata file. Falling back to other methods to determine prerequisites
    So then I tried checksetup.pl's suggestion and ran:
        /usr/bin/perl install-module.pl --all
    It seems to have installed DBD::SQLite without any problems, but again I see a warning that it's skipping tests because of the notest pragma. When I re-run checksetup.pl, it shows three of the four original DB drivers in the "not found" list:
        PostgreSQL: /usr/bin/perl install-module.pl DBD::Pg
        MySQL:      /usr/bin/perl install-module.pl DBD::mysql
        Oracle:     /usr/bin/perl install-module.pl DBD::Oracle
    So running it with --all installed the SQLite driver without any problems, but for some reason I can't seem to install the MySQL driver. Again, I need MySQL because that's what XAMPP uses, and because I prefer MySQL regardless. I have a feeling it has something to do with this notest pragma error. Any ideas? Thanks in advance!
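    The "Can't exec mysql_config" failure just means the MySQL client development files aren't on the PATH; XAMPP bundles its own copy rather than installing the system one. Two hedged ways around it (the /opt/lampp path assumes a default XAMPP-for-Linux install, and install-module.pl may not forward the --mysql_config flag, hence the manual build):
        $ sudo apt-get install libmysqlclient-dev          # system client headers provide mysql_config
        # or build DBD::mysql by hand against XAMPP's copy:
        $ tar xzf DBD-mysql-4.021.tar.gz && cd DBD-mysql-4.021
        $ perl Makefile.PL --mysql_config=/opt/lampp/bin/mysql_config
        $ make && sudo make install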

    Read the article

  • Best available technology for layered disk cache in linux

    - by SpliFF
    I've just bought a 6-core Phenom with 16 GB of RAM. I use it primarily for compiling and video encoding (and occasional web/db work). I'm finding all activities get disk-bound and I just can't keep all 6 cores fed. I'm buying an SSD RAID to sit between the HDD and tmpfs. I want to set up a "layered" filesystem where reads are cached in tmpfs but writes safely go through to the SSD. I then want files (or blocks) that haven't been read lately on the SSD to be written back to a HDD using a compressed FS or block layer. So basically:
        Reads: check tmpfs, then check SSD, then check HDD.
        Writes: straight to SSD (for safety), then tmpfs (for speed).
        Periodically, or when space gets low: move the least frequently accessed files down one layer.
    I've seen a few projects of interest: CacheFS, cachefsd and bcache seem pretty close, but I'm having trouble determining which are practical. bcache seems a little risky (early adoption); CacheFS seems tied to specific network filesystems. There are "union" projects, unionfs and aufs, that let you mount filesystems over each other (a USB device over a DVD, usually), but both are distributed as a patch, and I get the impression this sort of "transparent" mounting was going to become a kernel feature rather than an FS. I know the kernel has a built-in disk cache, but it doesn't seem to work well with compiling. I see a 20x speed improvement when I move my source files to tmpfs. I think it's because the standard buffers are dedicated to a specific process, and compiling creates and destroys thousands of processes during a build (just guessing there). It looks like I really want those files precached. I've read that tmpfs can use virtual memory. In that case, is it practical to create a giant tmpfs with swap on the SSD? I don't need to boot off the resulting layered filesystem; I can load grub, the kernel and the initrd from elsewhere if needed. So that's the background. The question has several components, I guess:
        Recommended FS and/or block layer for the SSD and compressed HDD.
        Recommended mkfs parameters (block size, options etc.)
        Recommended cache/mount technology to bind the layers transparently
        Required mount parameters
        Required kernel options / patches, etc.
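    On the "giant tmpfs with swap on the SSD" question specifically, that combination needs no special filesystem support; a minimal sketch, where the device name and sizes are illustrative:
        # dedicate an SSD partition to swap and prefer it over any HDD swap
        $ sudo mkswap /dev/sdb1
        $ sudo swapon -p 10 /dev/sdb1
        # a tmpfs allowed to grow past RAM; cold pages spill to the SSD swap
        $ sudo mount -t tmpfs -o size=32G tmpfs /mnt/build
    Hot files stay in RAM and the kernel pages the rest out to the SSD on its own, which approximates the RAM-over-SSD half of the layering without any extra machinery.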

    Read the article

  • Excel 2007 VBA macros don't work in Parallels

    - by MindModel
    I've got a complex Excel spreadsheet I need to use at work. My colleagues use the spreadsheet on Windows PCs with no special configuration required. I want to run it on a MacBook Pro running Snow Leopard. The spreadsheet contains VBA macros which connect to external Oracle DBs over the Internet. If I understand correctly, Excel on the Mac doesn't run VBA macros, so I have to use Parallels. I installed Parallels on the Mac and it's running correctly, as far as I can tell. I installed Excel 2007 under Parallels. I can open the Excel spreadsheet in Parallels and click buttons in the spreadsheet to run macros, but the macros fail with compiler errors. I don't have the password to the source code of the VBA macros, and if possible I don't want to dig into the code at that level. I know that there are quite a few things that could go wrong, and examining the VBA code might help, but I'm hoping to solve the problem without going down that road. The spreadsheet runs without any special configuration on Windows, so I'm wondering if anyone out there knows of any limitations of Excel VBA macros under Parallels, or anything else I could do to get this spreadsheet working. It's the only thing that's keeping me from using this MacBook Pro at work. Here is the error message:
        Compile error in hidden module: clsXXXXx0020Toolx0020Ser. This error commonly occurs when code is incompatible with the version, platform, or architecture of this application. Click Help for more info.
        Compile error in hidden module: A protected module contains a compilation error. Because the error is in a protected module it cannot be displayed. This error commonly occurs when code is incompatible with the version or architecture of this application (for example, code in a document targets 32-bit Microsoft Office applications but it is attempting to run on 64-bit Office).
        This error has the following cause and solution:
        Cause of the error: The error is raised when a compilation error exists in the VBA code inside a protected (hidden) module. The specific compilation error is not exposed because the module is protected.
        Possible solutions: If you have access to the VBA code in the document or project, unprotect the module, and then run the code again to view the specific error. If you do not have access to the VBA code in the document, then contact the document author to have the code in the hidden module updated.

    Read the article

  • csync2 ERROR: Connection to remote host failed

    - by Emil Salama
    I was unable to find any articles to answer this question, so my best bet was to post it here. Scenario: we have 2 application servers in production hosting a PHP website, and I would like some folders to be synchronised between the two. The same setup worked in the development environment with no issues. I've followed all instructions from http://www.cloudedify.com/synchronising-files-in-cloud-with-csync2/, but I still get the same result; the firewall has been disabled on both boxes for troubleshooting purposes.
    Config files:
    csync2.cfg:
        nossl * *;
        group production {
            host server1;
            host server2;
            key /etc/csync-production-group.key;
            include /etc/httpd/sites-available;
            include /xxxxxx/public_html/files
            include /xxxxxxx/magento/media/catalog/product
            include /xxxxxxx/magento/media/brands
            exclude *.log;
            exclude /xxxx/public_html/file/cache;
            exclude /xxxxx/public_html/magento/var/cache;
            exclude /xxxx/public_html/logs;
            exclude /xxxxx/public_html/magento/var/log;
            backup-directory /data/sync-conflicts/;
            backup-generations 2;
            auto younger;
        }
    /etc/xinetd.d/csync2:
        service csync2
        {
            disable         = no
            flags           = REUSE
            socket_type     = stream
            wait            = no
            user            = root
            group           = root
            server          = /usr/sbin/csync2
            server_args     = -i -D /data/sync-db/
            port            = 30865
            type            = UNLISTED
            log_type        = FILE /data/logs/csync2/csync2-xinetd.log
            log_on_failure  += USERID
        }
    I've made sure that the daemon is listening on both servers on port 30865 and that the keys match on both servers. I've run a tcpdump on each server; output as follows:
        12:20:31.366771 IP server1.49919 > server2.csync2: Flags [S], seq 445156159, win 14600, options [mss 1460,sackOK,TS val 794864936 ecr 0,nop,wscale 7], length 0
        12:20:31.366810 IP server2.csync2 > server1.49919: Flags [S.], seq 450593575, ack 445156160, win 14480, options [mss 1460,sackOK,TS val 794798911 ecr 794864936,nop,wscale 7], length 0
        12:20:31.367101 IP server1.49919 > server2.csync2: Flags [.], ack 1, win 115, options [nop,nop,TS val 794864937 ecr 794798911], length 0
        12:20:31.367138 IP server1.49919 > server2.csync2: Flags [P.], seq 1:9, ack 1, win 115, options [nop,nop,TS val 794864937 ecr 794798911], length 8
        12:20:31.367147 IP server2.csync2 > server1.49919: Flags [.], ack 9, win 114, options [nop,nop,TS val 794798912 ecr 794864937], length 0
        12:20:31.368625 IP server2.csync2 > server1.49919: Flags [R.], seq 1, ack 9, win 114, options [nop,nop,TS val 794798913 ecr 794864937], length 0
    Is there anything else I'm missing or should be doing?
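    The RST right after the first 8-byte payload suggests the server side is rejecting the session at the csync2 protocol layer rather than the network layer, and csync2 itself can usually say why. A hedged sketch of the usual checks (the hostname flag assumes the names in the config are meant to match these machines):
        $ csync2 -T                 # dry-run: list what csync2 thinks is out of sync
        $ csync2 -xvvv              # attempt a sync with maximum verbosity
        $ csync2 -xvvv -N server1   # force the local hostname if it doesn't match "host server1;" in the config
    A common culprit with this exact symptom is a hostname mismatch: csync2 identifies peers by the names in the group block, so if hostname on each box doesn't return server1/server2 exactly, the TCP connection is accepted by xinetd and then immediately dropped.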

    Read the article

  • How to tell if a freebsd jail is up to date?

    - by Martin Torhage
    I've set up a "Service Jail" in FreeBSD 8.0 according to the FreeBSD Handbook (http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/jails-application.html). After upgrading the host to the latest patch level and then performing a jail upgrade, freebsd-update fetch still reports that there are files in need of an update in the jail. Is this expected? If so, how do I know whether a jail is up to date? This is what I've done in more detail: after the initial setup of the jail, freebsd-update fetch reported that there were no updates available, either in the host system or in the jail. This was expected. A while later, freebsd-update fetch reported that the following files were in need of an update, both in the host and in the jail:
        /usr/lib/libssl.a
        /usr/lib/libssl_p.a
        /usr/lib/libzpool.a
        /usr/lib32/libssl.a
        /usr/lib32/libssl_p.a
        /usr/lib32/libzpool.a
    I updated the host and followed the upgrade guide for the jail (http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/jails-application.html#JAILS-SERVICE-JAILS-UPGRADING). freebsd-update fetch now reports that there are no updates available on the host, but the following is the output of freebsd-update fetch in the jail:
        [root@bb /]# freebsd-update fetch
        Looking up update.FreeBSD.org mirrors... 3 mirrors found.
        Fetching metadata signature for 8.0-RELEASE from update5.FreeBSD.org... done.
        Fetching metadata index... done.
        Inspecting system... done.
        Preparing to download files... done.
        The following files are affected by updates, but no changes have been downloaded because the files have been modified locally:
        /var/db/mergemaster.mtree
        The following files will be updated as part of updating to 8.0-RELEASE-p2:
        /usr/lib/libssl.a
        /usr/lib/libssl_p.a
        /usr/lib/libzpool.a
        /usr/lib32/libssl.a
        /usr/lib32/libssl_p.a
        /usr/lib32/libzpool.a
    Shouldn't freebsd-update know that the jail is up to date, or have I failed to upgrade it? How am I supposed to know whether a jail is up to date if freebsd-update can't tell? I'm sure I ran make cleandir twice before make buildworld. TIA
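    One detail worth checking: freebsd-update derives the release from the kernel version of the system it runs under, so running it inside the jail compares the jail's world against the host's kernel. Running it from the host, pointed at the jail's root with -b, avoids that ambiguity. A hedged sketch (the jail path is illustrative):
        # on the host, against the jail's filesystem
        $ freebsd-update -b /usr/jails/bb fetch
        $ freebsd-update -b /usr/jails/bb install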

    Read the article

  • VirtualBox guest responds to ping but all ports closed in nmap

    - by jeremyjjbrown
    I want to set up a test database on a VM for development purposes, but I cannot connect to the server via the network. I've got an Ubuntu 12.04 VM installed on a 12.04 host in VirtualBox 4.2.4, set to:
        Bridged network mode
        Promiscuous: Allow All
    When I try to ping the virtual guest from any network client I get the expected result:
        PING 192.168.1.209 (192.168.1.209) 56(84) bytes of data.
        64 bytes from 192.168.1.209: icmp_req=1 ttl=64 time=0.427 ms
        ...
    Internet access inside the VM is normal. But when I nmap it I get nothing!
        jeremy@bangkok:~$ nmap -sV -p 1-65535 192.168.1.209
        Starting Nmap 5.21 ( http://nmap.org ) at 2012-11-15 18:39 CST
        Nmap scan report for jeremy (192.168.1.209)
        Host is up (0.0032s latency).
        All 65535 scanned ports on jeremy (192.168.1.209) are closed
        Service detection performed. Please report any incorrect results at http://nmap.org/submit/
        Nmap done: 1 IP address (1 host up) scanned in 0.88 seconds
    ufw and iptables on the VM:
        jeremy@jeremy:~$ sudo service ufw stop
        [sudo] password for jeremy:
        ufw stop/waiting
        jeremy@jeremy:~$ sudo iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination
    I have scanned around and have no reason to believe that my router is blocking internal ports:
        jeremy@bangkok:~$ nmap -v 192.168.1.2
        Starting Nmap 5.21 ( http://nmap.org ) at 2012-11-15 18:44 CST
        Initiating Ping Scan at 18:44
        Scanning 192.168.1.2 [2 ports]
        Completed Ping Scan at 18:44, 0.00s elapsed (1 total hosts)
        Initiating Parallel DNS resolution of 1 host. at 18:44
        Completed Parallel DNS resolution of 1 host. at 18:44, 0.03s elapsed
        Initiating Connect Scan at 18:44
        Scanning 192.168.1.2 [1000 ports]
        Discovered open port 445/tcp on 192.168.1.2
        Discovered open port 139/tcp on 192.168.1.2
        Discovered open port 3306/tcp on 192.168.1.2
        Discovered open port 80/tcp on 192.168.1.2
        Discovered open port 111/tcp on 192.168.1.2
        Discovered open port 53/tcp on 192.168.1.2
        Discovered open port 5902/tcp on 192.168.1.2
        Discovered open port 8090/tcp on 192.168.1.2
        Discovered open port 6881/tcp on 192.168.1.2
        Completed Connect Scan at 18:44, 0.02s elapsed (1000 total ports)
        Nmap scan report for 192.168.1.2
        Host is up (0.0017s latency).
        Not shown: 991 closed ports
        PORT     STATE SERVICE
        53/tcp   open  domain
        80/tcp   open  http
        111/tcp  open  rpcbind
        139/tcp  open  netbios-ssn
        445/tcp  open  microsoft-ds
        3306/tcp open  mysql
        5902/tcp open  vnc-2
        6881/tcp open  bittorrent-tracker
        8090/tcp open  unknown
        Read data files from: /usr/share/nmap
        Nmap done: 1 IP address (1 host up) scanned in 0.08 seconds
    Answer: turns out all of the ports were open to the network after all. I installed OpenSSH and confirmed it, then edited my DB conf to listen on external IPs, and all was well.

    Read the article

  • Local, Multiple-Blog (ie Dashboard) Blogging Software as Alternative to Blogger [closed]

    - by Synetech inc.
    FOR RE-OPENING: I don't see how this is "too localized". Plenty of people like to run their own web apps instead of relying on third-party services; if that were not true, then WordPress, phpBB, Apache, PHP, etc. would not be available for general use. As for the "Internet audience at large", I must have missed the rule that you are only allowed to ask for help with things that apply to everyone else too; I thought you were allowed to ask for help. Besides, if someone knows of software that fulfils the question, then it is relevant to whomever would download it, and so it is not only applicable to an "extraordinarily narrow situation". (Besides, the reason I was asking was that Google had announced it was discontinuing FTP support for Blogger, so many people were affected (read: NOT TOO LOCALIZED) and were trying to find alternatives.)
    Hi, I am trying to find software (for Windows; PHP; MySQL/SQLite/flat files; free; open source) to run all of my software and services locally, so that I can keep my files on, and when needed host from, my own system instead of some remote computer. I've already selected things like web, FTP and DB servers. I've chosen forum and wiki software, as well as an RCS system. At this point (well, I still need to choose bug-tracking software, but besides that) all I'm still looking for is blogging software. I still use Blogger and am trying to find something that I can use to import my Blogger content and store it on (and publish to) my home system. I have read about various blogging software, including WordPress, Movable Type and Textpattern. The problem is that I am trying to find something that is like Blogger (which, from what I can tell, is not available on Google Code as open source). What I specifically need is multiple-blog support: multiple blogs a la the Blogger Dashboard, not multiple user accounts (although that is important as well). The closest thing I have been able to find is using WordPress categories to simulate multiple blogs, but that's not really what I want. I want software that I can run locally and that has a multi-blog dashboard like Blogger's. Any ideas? Thanks a lot!

    Read the article

  • How can you know what w3wp.exe is doing? (or how to diagnose a performance problem)

    - by Daniel Magliola
    I'm having a performance problem on a site we've made, and I'm not exactly sure how to start diagnosing it. The short description: we have a very small site (http://hearablog.com) with very little traffic, on a crappy dedicated server; CPU is always very high, sometimes staying at 100% for minutes, and w3wp.exe is taking most of it. A typical scenario: w3wp.exe takes 60% and SQL Server takes about 30%. Our DB is pretty small too.
    Long description and more details: the site is hosted on a very crappy server by Cari.Net. From the beginning we had the feeling that the server didn't quite behave correctly, like some things would take just too long, so this could be a configuration problem from the get-go. It may also be that we are getting a virtual server while we're supposed to have a dedicated one, although we have no evidence that would indicate this, except for the fact that the server tends to be quite slow. The server is Windows 2008 Standard 64-bit with SQL Server 2008 Express. Hardware is a Celeron 2.80 GHz with 1 GB of RAM. The website is developed in ASP.NET MVC, using Entity Framework for data access. Now, this is pretty crappy hardware, but I've had other servers with these guys, with equivalent (or worse) hardware, and performance is much better. That said, the other servers run Windows 2003 and SQL Server 2005, and I'm using ASP.NET "WebForms" 2.0 on them (no MVC, no LINQ, no EF), so I'm not sure whether going to 2008 and the other stuff carries an expected big performance penalty. I'm serving MP3 files (5-20 MB) regularly, which is a slightly unusual load; maybe that is causing some kind of problem? Would that cause w3wp to use a lot of CPU? Disk usage seems very low. Memory is usually around 90%, but disk usage seems to indicate it's not paging much. I get tons of e-mails every day about SQL timeouts for queries taking over 30 seconds, although all our queries are pretty straightforward (or should be, but EF may be screwing them up). This is what Resource Monitor looks like in one of these "sprints" of 100% CPU, in case there's anything useful there. And a snapshot of some performance counters. Now, what confuses me very much is that the CPU usage of w3wp is just so high; it shouldn't be doing much, really. So my questions are:
        Is there any way of finding out "what" it is doing? Maybe even profile it?
        Any performance counters I should be looking at?
        Is this to be expected given this hardware/software configuration?
        Could this be caused by some kind of configuration failure? Where would you start looking?
    Thank you VERY much. Daniel Magliola
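    On the "what is it doing" question, one low-overhead approach is to capture a few full memory dumps of w3wp.exe during a CPU spike with Sysinternals ProcDump and then inspect the managed thread stacks (in WinDbg or Visual Studio). A hedged sketch; the thresholds are illustrative:
        > procdump -ma -c 80 -s 10 -n 3 w3wp.exe
        (writes up to 3 full dumps when w3wp.exe sustains over 80% CPU for 10 seconds)
    Repeated stacks across dumps usually point straight at the hot code path, e.g. an Entity Framework query being recompiled on every request.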

    Read the article

  • All the Gear and No Idea: Suggestions for re-designing my home/office/entertainment network

    - by 5arx
    Help/advice/suggestions please: I have a load of kit that I love but which currently operates in a disconnected, sometimes counter-productive way. Because I never really had a master plan, I just added these things one after another and connected them up in ad hoc ways. Since I bought my MacBook I've found I spend much less time on the Mac Pro that was until then my main machine. Perversely, as my job involves writing .NET software, I spend a lot of Mac time actually inside a Windows 7 VM. I stream media from the HP box to the PS3 and thus to the TV, but it's not without its limitations/annoyances. We listen to each other's iTunes libraries, but the music files are all over the place, and it would be good to know they were all safely in one location (and fully backed up). I need to come up with a strategy that will allow me to use all the kit for work, play (recording live music, making tunes, iMovie work), pushing/streaming media to the TV, and sharing files with my other half (she uses a Windows laptop and her iPod touch). Ideally I'd like to be able to work on any of the machines and have a shared home drive visible to all machines, so all my current files are synced up wherever I am. It would be great if I could access everything securely and quickly over the web. I'd also like to be able to set up a background backup process.
    The kit list thus far:
        Apple Mac Pro, 8 GB RAM, 3 x 250 GB RAID 0 + 1 TB
        Apple MacBook Pro 13", 8 GB RAM, 250 GB (I spend a lot of my work time in a Windows 7 VM on this)
        Crappy Acer laptop (for the children's use: iPlayer, watching movie/TV files)
        HP ProLiant server, 4 GB RAM, 80 GB + 160 GB + 300 GB
        Sun Ultra 10, 2 x 80 GB (old, but in top-notch condition)
        PS3, 160 GB
        iPod Classic
        2 x 8 GB iPod Touch
    Observations:
        Part of the problem is our dual use of Windows and OS X; we can't go for a pure NT-style roaming profile.
        Because the server is also used for hosting test/beta applications and a SQL Server DB, it can't be dedicated to file serving.
        The two Macs really could do with sharing a roaming profile or similar.
        I'd love to be able to do something useful with the Ultra 10. My other half has been trying to throw it away for over five years now and regularly asks what function it serves in my study :-(
        I've got no shortage of 500 GB external USB hard drives.
        iMovie files are very large and ideally would be processed on a RAID system.
        Apple's Time Machine isn't so great.
    If anyone could suggest all or part of a setup that would fulfil some of my requirements I'd be very grateful. I am willing to consider purchasing one or two more bits of kit (an Apple TV and a Squeezebox have been mooted by friends) if they will help make efficiencies rather than add to the chaos and confusion. Thanks for looking.

    Read the article

  • autocomplete not working on one server, works on others

    - by dogmatic69
    I have Ubuntu 10.10 x64 and x86 running on various servers, and autocomplete works on all of them bar one. The issue: apt-<tab> would show a list of options, but sudo apt-<tab> would not. After fiddling with it for a few hours, I found that /etc/bash_completion did not exist on the broken server. After copying the one from a working server it now works, but still not properly: sudo apt-get ins<tab> does not do anything. Listing the files in /etc/bash_completion.d/ on the working server shows about 50 files, while the broken one has only two or three. I don't think I can just copy those files, though, as they might offer completions for things that are not even installed.
    TL;DR: autocomplete is broken; how can I fix it? It seems like it's disabled somewhere; why is this?
    EDIT: OK, it was never installed...
        $ sudo apt-get install bash-completion
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed
          bash-completion
        0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded.
        Need to get 140kB of archives.
        After this operation, 1,061kB of additional disk space will be used.
        Get:1 http://archive.ubuntu.com/ubuntu/ maverick-updates/main bash-completion all 1:1.2-2ubuntu1.1 [140kB]
        Fetched 140kB in 0s (174kB/s)
        Selecting previously deselected package bash-completion.
        (Reading database ... 23808 files and directories currently installed.)
        Unpacking bash-completion (from .../bash-completion_1%3a1.2-2ubuntu1.1_all.deb) ...
        Processing triggers for man-db ...
        Setting up bash-completion (1:1.2-2ubuntu1.1) ...
    It's now kind of working, but still wonky: apt-get ins<tab> gives "sudo apt-get insserv" as the option, and apt-get install php5<tab> gives "apt-get install php5/", not the php5-* options.
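    For completeness, the completions only load if the shell actually sources the script; on Ubuntu this stanza normally lives (commented out by default) in /etc/bash.bashrc or ~/.bashrc, so it's worth confirming it is active on the odd box out:
        # source the system-wide completion script if present
        if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
            . /etc/bash_completion
        fi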

    Read the article

  • Issues with Server 2012 using DFSR running on Hyper-V 2012

    - by Bryan
    We have a number of Server 2012 systems, all of which run virtualised on Hyper-V 2012 Server. We are having problems with two such virtual instances, both of which are used as file servers, whereby they occasionally stop responding to requests to serve files to clients. After logging on to the server, attempts to shut it down gracefully fail (no error; it just fails to acknowledge the shutdown request). Recovery is a case of power-cycling the server(s) from the Hyper-V console. These two servers don't serve a large number of users (one serves no more than 6 users, the other around 20), they are in the same domain, but on different physical hardware (and at different sites), and they don't lock up at the same time. They both use DFSR to replicate a fairly large amount of data (200 GB) between themselves over ADSL connections; this works fine, and we have been using DFSR for this on the previous two generations of server OS we have used (Server 2008 R2 and Server 2003, both of which were physical installs, however). Today, when one of the servers crashed, I noticed an entry in the event log similar to the following:
        Log Name:      Application
        Source:        ESENT
        Date:          27/11/2012 10:25:55
        Event ID:      533
        Task Category: General
        Level:         Warning
        Keywords:      Classic
        User:          N/A
        Computer:      HAL-FS-01.example.com
        Description:
        DFSRs (1500) \\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\dfsr.db: A request to write to the file "\\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\fsr.log" at offset 4423680 (0x0000000000438000) for 4096 (0x00001000) bytes has not completed for 36 second(s). This problem is likely due to faulty hardware. Please contact your hardware vendor for further assistance diagnosing the problem.
    When the server started up again, I went to find the event log entry to investigate further and found that it was no longer there (I assume it was in memory but failed to write to disk before the server was powered off, for the reason mentioned in the message); I found the text above by searching further back in the event log. Both of these virtual servers have their E: volumes fully allocated, as opposed to dynamically expanding, and there are no issues on any of the other virtual servers (which include Server 2012, Server 2008 R2 and Ubuntu 12.04 x64). There are no signs of IO, memory or CPU starvation on the host systems. I've used performance counters on the affected virtual servers to monitor memory usage (including non-paged pool usage), as well as CPU and network utilisation, and none of these show any signs of trouble when the issue arises. I would have thought our configuration isn't that uncommon, so I'm wondering if anyone else has seen this and managed to resolve it?
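    When the hang looks DFSR-related, it can be useful to watch the replication backlog in the run-up to a lockup; a sustained, growing backlog over the ADSL links would point at the DFSR JET database being driven harder than the virtual disks can absorb. A hedged sketch (the replication group and folder names are placeholders):
        > dfsrdiag Backlog /RGName:<replication-group> /RFName:<replicated-folder> /SMem:HAL-FS-01 /RMem:<partner-server>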

    Read the article

  • Snmpd updates interface counters slowly, or something like this

    - by Korjavin Ivan
    I updated one of my FreeBSD boxes to 9-STABLE (a totally new installation) and installed net-snmp for monitoring.
        $ uname -r
        9.1-PRERELEASE
        $ pkg_info
        net-snmp-5.7.1_7
        Information for net-snmp-5.7.1_7:
        Comment: An extendable SNMP implementation
        ....
        $ cat /var/db/ports/net-snmp/options
        # This file is auto-generated by 'make config'.
        # Options for net-snmp-5.7.1_7
        _OPTIONS_READ=net-snmp-5.7.1_7
        _FILE_COMPLETE_OPTIONS_LIST= IPV6 MFD_REWRITES PERL PERL_EMBEDDED PYTHON DUMMY TKMIB DMALLOC MYSQL AX_SOCKONLY UNPRIVILEGED
        OPTIONS_FILE_UNSET+=IPV6
        OPTIONS_FILE_UNSET+=MFD_REWRITES
        OPTIONS_FILE_SET+=PERL
        OPTIONS_FILE_SET+=PERL_EMBEDDED
        OPTIONS_FILE_UNSET+=PYTHON
        OPTIONS_FILE_SET+=DUMMY
        OPTIONS_FILE_UNSET+=TKMIB
        OPTIONS_FILE_SET+=DMALLOC
        OPTIONS_FILE_UNSET+=MYSQL
        OPTIONS_FILE_UNSET+=AX_SOCKONLY
        OPTIONS_FILE_UNSET+=UNPRIVILEGED
    I have about 500 VLANs on this machine and collect interface information through snmpd into two different systems, Zabbix and Cacti, and both of them plot graphs with blank fields. I tried changing the polling time in Zabbix from 15 sec to 30, 60, 90, 120 and back to 10, and I still get blank fields. snmpd.conf is empty, containing only access controls. This configuration worked fine on FreeBSD 8. Where is my fault? How do I fix these graphs?
    UPDATE: Changing the polling time and switching off one of the agents doesn't help. I looked at the Zabbix log (data received from snmpd) and saw that the reported rate is wrong: my iftop showed a rate of about 90 Mbit/s, but from snmpd's counters Zabbix derived about 2 Mbit/s. (The original post illustrated this with a screenshot in a Russian locale; only the numbers matter.) I understand that snmpd doesn't return a rate, just a counter, but how is this possible? Why 2 Mbit/s? I tried recompiling snmpd both with and without 64-bit counters, and in both variants the blank fields persist. So I think my OS (FreeBSD) isn't updating the interface counters properly. I'm still collecting tcpdump output to find the relevant request/response, but there's too much noise in it.
    UPDATE 2: I decoded the tcpdump capture and published it as a Google doc (gdocfile). The time differences look strange, as if Zabbix sometimes "forgets" to make a request and then makes two in a row.
    UPDATE 3: I parsed the log from the command "while true; do netstat -bin -I vlan4008 >> /var/log/netstat; sleep 300; done" and uploaded it as a Google doc (link), adding a formula for the rate. All the counters in the OS look good. Now I think the problem is either: 1. Zabbix requests twice in a row (and what about Cacti?), or 2. snmpd uses 32-bit counters.
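    One quick way to separate the two suspects is to query both counter widths directly and compare their deltas over a fixed interval; a hedged sketch with placeholder community, host and ifIndex:
        $ snmpget -v2c -c <community> <host> IF-MIB::ifInOctets.<ifIndex> IF-MIB::ifHCInOctets.<ifIndex>
        (repeat after 60 seconds and compare the deltas)
    Note that the 32-bit ifInOctets counter wraps roughly every 6 minutes at 90 Mbit/s, so a poller reading it on a slow or irregular cycle can easily compute a nonsense rate like 2 Mbit/s; the 64-bit IF-MIB::ifHC* counters (SNMPv2c or v3 only) avoid that.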

    Read the article

  • SQL Server High Availability - Mirroring with MSCS?

    - by David
    I'm looking at options for high availability for my SQL Server-powered application. The requirements are:
        HA protection from storage failure.
        Data accessibility when one of the DB servers is undergoing software updates (e.g. a planned outage for Windows Update / SQL Server service packs).
        Must not involve much in the way of hardware procurement.
    The application is an ASP.NET web application, and each of the web application's users has their own database instance. I've seen two main options: SQL Server failover clustering and SQL Server mirroring. I understand that SQL Server failover clustering requires purchasing a shared disk array and doesn't offer any protection if the shared storage goes down (so the documentation recommends setting up mirroring between two clusters). Database mirroring seems the cheaper option, as it only requires two database servers and a simple witness box, but I've heard it doesn't work well when you have a large number of databases. The application I'm developing gives each client their own database; there could be hundreds of databases. Setting up the mirroring is no problem, thanks to the automation systems we have in place.
    My final point concerns how failover works with respect to client connections. SQL Server failover clustering uses MSCS, which means the cluster is invisible to clients: a connection attempt might fail during the failover, but a simple reconnect will have it working again. Mirroring, however, as far as I know, requires that the client be aware of the mirrored partners: if the client cannot connect to the primary server, it tries the secondary server. I'm wondering how this works with respect to connection pooling in ASP.NET applications: does the client-side failover mean there's a potential 2-second pause (assuming a 2000 ms TCP timeout policy) while the connection pool tries the primary server on every connection attempt?
    I read somewhere that mirroring can be used on top of MSCS, which means the client does not need to be aware of mirroring (so there wouldn't be any potential delays during connection, and no changes would need to be made to the client, not even the connection string); however, I'm finding it hard to get documentation or white papers on this approach. If true, it means the best method is mirroring (for HA) with MSCS (for client ignorance and connection performance). But how does this scale to a server instance that might contain hundreds of mirrored databases?
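    For reference on the client-awareness point: with ADO.NET's SqlClient the mirror partner is declared in the connection string, and the provider caches the partner name per connection pool after the first successful connect, so the probe penalty is normally paid only while the primary is actually down. A hedged sketch (server and database names are illustrative):
        Server=SQLNODE1;Failover Partner=SQLNODE2;Database=ClientDb042;Integrated Security=True;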

    Read the article

  • Why is 50.22.53.71 hitting my localhost node.js in an attempt to find a php setup

    - by laggingreflex
    I just created a new app using the angular-fullstack Yeoman generator, edited it a bit to my liking, and ran it with grunt on my localhost, and immediately upon starting up I got this flood of requests to paths that I haven't even defined. Is this a hacking attempt? And if so, how does the hacker (human or bot) immediately know where my server is and when it came online? Note that I haven't put anything online; it's just a localhost setup and I'm merely connected to the internet. (Although my router does allow incoming traffic on port 80.) Whois shows that the IP address belongs to SoftLayer Technologies. Never heard of them.
        Express server listening on 80, in development mode
        GET / [200] | 127.0.0.1 (Chrome 31.0.1650)
        GET /w00tw00t.at.blackhats.romanian.anti-sec:) [404] | 50.22.53.71 (Other)
        GET /scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /admin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /admin/pma/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /admin/phpmyadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /db/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /dbadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /myadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /mysql/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /mysqladmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /typo3/phpmyadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpMyAdmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpmyadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpmyadmin1/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpmyadmin2/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /pma/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /web/phpMyAdmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /xampp/phpmyadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /web/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /php-my-admin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /websql/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpmyadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpMyAdmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpMyAdmin-2/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /php-my-admin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpMyAdmin-2.5.5/index.php [404] | 50.22.53.71 (Other)
        GET /phpMyAdmin-2.5.5-pl1/index.php [404] | 50.22.53.71 (Other)
        GET /phpMyAdmin/ [404] | 50.22.53.71 (Other)
        GET /phpmyadmin/ [404] | 50.22.53.71 (Other)
        GET /mysqladmin/ [404] | 50.22.53.71 (Other)
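    The w00tw00t signature and the setup.php paths are characteristic of automated scanners (ZmEu and similar) that sweep whole address ranges looking for vulnerable phpMyAdmin installs; they find the server by scanning the ISP's IP block for an open port 80, not by watching it come online. If the router must keep forwarding port 80, dropping the scanner at the host firewall is a one-liner (this assumes ufw is in use, as on stock Ubuntu):
        $ sudo ufw deny from 50.22.53.71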

    Read the article

  • PHP & MySQL on Mac OS X: Access denied for GUI user

    - by Eirik Lillebo
    Hey! This question was first posted to Stack Overflow, but as it is perhaps just as much a server issue, I thought it might be just as well to post it here also. I have just installed and configured Apache, MySQL, PHP and phpMyAdmin on my MacBook in order to have a local development environment. But after I moved one of my projects over to the local server, I get a weird MySQL error from one of my calls to mysql_query():
        Access denied for user '_securityagent'@'localhost' (using password: NO)
    First of all, the query I'm sending to MySQL is valid; I've even tested it through phpMyAdmin with a perfect result. Secondly, the error only happens here, while I have at least 4 other MySQL connections and queries per page. This call to mysql_query() happens at the end of a really long function that handles data for newly created or modified articles. This is basically what it does:
        1. Collect all the data from the article form (title, content, dates, etc.)
        2. Validate the collected data
        3. Connect to the database
        4. Dynamically build the SQL query based on the validated article data
        5. Send the query to the database before closing the connection
    Pretty basic, I know. I did not recognize the username "_securityagent", so after a quick search I came across an article at Apple's Developer Connection talking about some random bug: "Mac OS X's security infrastructure gets around this problem by running its GUI code as a special user, '_securityagent'." Then I put a var_dump() on all variables used in the mysql_connect() call, and every time it returns the correct values (where the username is not "_securityagent", of course). Thus I'm wondering if anyone has any idea why '_securityagent' is trying to connect to my database, and how I can keep this error from occurring when I call mysql_query().
    Update: here is the exact code I'm using to connect to the database, with a little explanation:
        The connection error happens at a call to mysql_query() in function X in class_1.
        class_1 uses class_2 to connect to the database.
        class_2 reads a config file with the database connection variables (host, user, pass, db).
        class_2 connects to the database through the following function:
        var $SYSTEM_DB_HOST = "";
        function connect_db() {
            // Reads the config file
            include('system_config.php');
            if (!($SYSTEM_DB_HOST == "")) {
                mysql_connect($SYSTEM_DB_HOST, $SYSTEM_DB_USER, $SYSTEM_DB_PASS);
                @mysql_select_db($SYSTEM_DB);
                return true;
            } else {
                return false;
            }
        }

    Read the article

  • Benchmark MySQL Cluster using flexAsynch: No free node id found for mysqld(API)?

    - by quanta
    I am going to benchmark MySQL Cluster using flexAsynch, following this guide; details below:

        mkdir /usr/local/mysqlc732/
        cd /usr/local/src/mysql-cluster-gpl-7.3.2
        cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysqlc732/ -DWITH_NDB_TEST=ON
        make
        make install

    Everything works fine until this step:

        # /usr/local/mysqlc732/bin/flexAsynch -t 1 -p 80 -l 2 -o 100 -c 100 -n
        FLEXASYNCH - Starting normal mode
        Perform benchmark of insert, update and delete transactions
        1 number of concurrent threads
        80 number of parallel operation per thread
        100 transaction(s) per round
        2 iterations
        Load Factor is 80%
        25 attributes per table
        1 is the number of 32 bit words per attribute
        Tables are with logging
        Transactions are executed with hint provided
        No force send is used, adaptive algorithm used
        Key Errors are disallowed
        Temporary Resource Errors are allowed
        Insufficient Space Errors are disallowed
        Node Recovery Errors are allowed
        Overload Errors are allowed
        Timeout Errors are allowed
        Internal NDB Errors are allowed
        User logic reported Errors are allowed
        Application Errors are disallowed
        Using table name TAB0
        NDBT_ProgramExit: 1 - Failed

    ndb_cluster.log:

        WARNING -- Failed to allocate nodeid for API at 127.0.0.1. Returned error: 'No free node id found for mysqld(API).'

    I have also recompiled with -DWITH_DEBUG=1 -DWITH_NDB_DEBUG=1. How can I run flexAsynch in debug mode?

        # /usr/local/mysqlc732/bin/flexAsynch -h
        FLEXASYNCH
        Perform benchmark of insert, update and delete transactions
        Arguments:
          -t                Number of threads to start, default 1
          -p                Number of parallel transactions per thread, default 32
          -o                Number of transactions per loop, default 500
          -l                Number of loops to run, default 1, 0=infinite
          -load_factor Number   Load factor in index in percent (40 -> 99)
          -a                Number of attributes, default 25
          -c                Number of operations per transaction
          -s                Size of each attribute, default 1 (PK is always of size 1, independent of this value)
          -simple           Use simple read to read from database
          -dirty            Use dirty read to read from database
          -write            Use writeTuple in insert and update
          -n                Use standard table names
          -no_table_create  Don't create tables in db
          -temp             Create table(s) without logging
          -no_hint          Don't give hint on where to execute transaction coordinator
          -adaptive         Use adaptive send algorithm (default)
          -force            Force send when communicating
          -non_adaptive     Send at a 10 millisecond interval
          -local            1 = each thread its own node, 2 = round robin on node per parallel trans, 3 = random node per parallel trans
          -ndbrecord        Use NDB Record
          -r                Number of extra loops
          -insert           Only run inserts on standard table
          -read             Only run reads on standard table
          -update           Only run updates on standard table
          -delete           Only run deletes on standard table
          -create_table     Only run Create Table of standard table
          -drop_table       Only run Drop Table on standard table
          -warmup_time      Warmup time before measurement starts
          -execution_time   Execution time where measurement is done
          -cooldown_time    Cooldown time after measurement completed
          -table            Number of standard table, default 0
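
    For reference, the warning "No free node id found for mysqld(API)" generally means every [api]/[mysqld] slot in the cluster configuration is already claimed, so flexAsynch cannot obtain a node id; it is a configuration limit, not a build or debug-mode issue. A hedged sketch of the usual remedy (the config.ini path below is an assumption; use your own):

        # Append a few empty API slots so test clients such as flexAsynch
        # can claim a node id:
        cat >> /var/lib/mysql-cluster/config.ini <<'EOF'
        [api]
        [api]
        [api]
        EOF

        # Restart ndb_mgmd with --reload so it re-reads the changed file
        # (stop the running management daemon first):
        ndb_mgmd --reload -f /var/lib/mysql-cluster/config.ini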

    Read the article

  • Excessive CPU Utilization for Bind 9.8.1 `named` processes

    - by justinzane
    I just noticed that named is eating vast amounts of CPU time for a very small network with only a few domains. Can someone help me determine what is misconfigured, please? Or how to debug this.

    top:

        top - 14:13:08 up 25 days, 14:16, 1 user, load average: 1.04, 1.04, 1.05
        Tasks: 149 total, 1 running, 148 sleeping, 0 stopped, 0 zombie
        %Cpu(s): 17.3 us, 4.3 sy, 0.0 ni, 78.2 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
        KiB Mem:  2042776 total, 1347916 used, 694860 free, 249396 buffers
        KiB Swap: 3976080 total, 30552 used, 3945528 free, 574164 cached

          PID USER  PR NI VIRT RES SHR  S %CPU %MEM TIME+   COMMAND
        17445 bind  20  0 244m 42m 3124 S 99.4 2.2  2345:03 named

    rndc stats:

        +++ Statistics Dump +++ (1352931389)
        ++ Incoming Requests ++
        65869 QUERY
        ++ Incoming Queries ++
        31809 A
        241 NS
        3 CNAME
        27455 SOA
        276 PTR
        123 MX
        462 TXT
        5400 AAAA
        7 A6
        1 DS
        14 DNSKEY
        15 SPF
        55 AXFR
        8 ANY
        ++ Outgoing Queries ++
        [View: internal]
        22206 A
        509 NS
        10 SOA
        25 PTR
        12 MX
        524 TXT
        4851 AAAA
        62 DNSKEY
        19 SPF
        3157 DLV
        [View: external]
        87 A
        2 NS
        80 AAAA
        120 DNSKEY
        7 DLV
        [View: _bind]
        ++ Name Server Statistics ++
        65869 IPv4 requests received
        27670 requests with EDNS(0) received
        112 TCP requests received
        65652 responses sent
        20 truncated responses sent
        27670 responses with EDNS(0) sent
        62920 queries resulted in successful answer
        37117 queries resulted in authoritative answer
        28482 queries resulted in non authoritative answer
        7 queries resulted in referral answer
        591 queries resulted in nxrrset
        53 queries resulted in SERVFAIL
        2081 queries resulted in NXDOMAIN
        14530 queries caused recursion
        162 duplicate queries received
        55 requested transfers completed
        ++ Zone Maintenance Statistics ++
        109536 IPv4 notifies sent
        ++ Resolver Statistics ++
        [Common]
        [View: internal]
        29362 IPv4 queries sent
        2013 IPv6 queries sent
        28531 IPv4 responses received
        4209 NXDOMAIN received
        6 SERVFAIL received
        31 FORMERR received
        32 EDNS(0) query failures
        3359 query retries
        836 query timeouts
        5348 IPv4 NS address fetches
        3271 IPv6 NS address fetches
        83 IPv4 NS address fetch failed
        2779 IPv6 NS address fetch failed
        17421 DNSSEC validation attempted
        12731 DNSSEC validation succeeded
        4690 DNSSEC NX validation succeeded
        21104 queries with RTT 10-100ms
        7418 queries with RTT 100-500ms
        3 queries with RTT 500-800ms
        1 queries with RTT 800-1600ms
        [View: external]
        192 IPv4 queries sent
        104 IPv6 queries sent
        192 IPv4 responses received
        2 NXDOMAIN received
        104 query retries
        44 IPv4 NS address fetches
        44 IPv6 NS address fetches
        1 IPv4 NS address fetch failed
        1 IPv6 NS address fetch failed
        4 DNSSEC validation attempted
        3 DNSSEC validation succeeded
        1 DNSSEC NX validation succeeded
        152 queries with RTT 10-100ms
        40 queries with RTT 100-500ms
        [View: _bind]
        ++ Cache DB RRsets ++
        [View: internal (Cache: internal)]
        2007 A
        652 NS
        131 CNAME
        1 MX
        32 TXT
        421 AAAA
        28 DS
        244 RRSIG
        110 NSEC
        3 DNSKEY
        2 !A
        2 !TXT
        89 !AAAA
        2 !SPF
        14 !DLV
        148 NXDOMAIN
        [View: external (Cache: external)]
        55 A
        12 NS
        34 AAAA
        2 DS
        10 RRSIG
        1 DNSKEY
        [View: _bind (Cache: _bind)]
        ++ Socket I/O Statistics ++
        82958 UDP/IPv4 sockets opened
        2118 UDP/IPv6 sockets opened
        4 TCP/IPv4 sockets opened
        1 TCP/IPv6 sockets opened
        82956 UDP/IPv4 sockets closed
        2117 UDP/IPv6 sockets closed
        58 TCP/IPv4 sockets closed
        15 UDP/IPv4 socket bind failures
        2117 UDP/IPv6 socket connect failures
        29554 UDP/IPv4 connections established
        59 TCP/IPv4 connections accepted
        2117 UDP/IPv6 send errors
        5 UDP/IPv4 recv errors
        ++ Per Zone Query Statistics ++
        --- Statistics Dump --- (1352931389)
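
    Two numbers in the dump stand out: 27455 incoming SOA queries and 109536 IPv4 notifies sent, which for a network this small often points at a notify/transfer loop with a secondary rather than ordinary client load. A hedged way to identify the client responsible (rndc querylog and tcpdump are standard tools, but the interface name and log path below are assumptions):

        # Log queries for a short window, then inspect who dominates:
        rndc querylog on
        tcpdump -ni eth0 -c 2000 port 53     # interface name is an assumption
        rndc querylog off

        # Tally client addresses from the query log (field position depends
        # on your syslog format, so adjust the awk column as needed):
        grep ' query: ' /var/log/syslog | awk '{print $7}' | sort | uniq -c | sort -rn | head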

    Read the article

  • Can't upgrade my Ubuntu server, it gets stuck on openjdk-6-jre-headless

    - by Jean-Nicolas Boulay Desjardins
    I am using Ubuntu Server. When I do apt-get upgrade, it gets stuck on:

        Setting up openjdk-6-jre-headless (6b20-1.9.7-0ubuntu1) ...

    Why? And what can I do to stop it? I tried removing it with apt-get and got this error:

        E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.

    So then I tried this:

        dpkg --purge openjdk-6-jre-headless

    I got this:

        dpkg: dependency problems prevent removal of openjdk-6-jre-headless:
         openjdk-6-jre-lib depends on openjdk-6-jre-headless (>= 6b17).
         ca-certificates-java depends on openjdk-6-jre-headless (>= 6b16-1.6.1-2) | java6-runtime-headless; however:
          Package openjdk-6-jre-headless is to be removed.
          Package java6-runtime-headless is not installed.
          Package openjdk-6-jre-headless which provides java6-runtime-headless is to be removed.
         ca-certificates-java depends on openjdk-6-jre-headless (>= 6b16-1.6.1-2) | java6-runtime-headless; however:
          Package openjdk-6-jre-headless is to be removed.
          Package java6-runtime-headless is not installed.
          Package openjdk-6-jre-headless which provides java6-runtime-headless is to be removed.
        dpkg: error processing openjdk-6-jre-headless (--purge):
         dependency problems - not removing
        Errors were encountered while processing:
         openjdk-6-jre-headless

    The thing is, I think my DB is using it; not sure. I am using Cassandra with Thrift, so yes, it's getting a bit more complex. When I run

        # dpkg --configure -a

    I get:

        dpkg: dependency problems prevent configuration of openjdk-6-jre:
         openjdk-6-jre depends on openjdk-6-jre-headless (>= 6b20-1.9.7-0ubuntu1); however:
          Package openjdk-6-jre-headless is not configured yet.
        dpkg: error processing openjdk-6-jre (--configure):
         dependency problems - leaving unconfigured
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        dpkg: dependency problems prevent configuration of libaccess-bridge-java:
         libaccess-bridge-java depends on default-jre | openjdk-6-jre | sun-java6-jre; however:
          Package default-jre is not installed.
          Package openjdk-6-jre is not configured yet.
          Package sun-java6-jre is not installed.
        dpkg: error processing libaccess-bridge-java (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of icedtea-6-jre-cacao:
         icedtea-6-jre-cacao depends on openjdk-6-jre-headless (= 6b20-1.9.7-0ubuntu1); however:
          Package openjdk-6-jre-headless is not configured yet.
        dpkg: error processing icedtea-6-jre-cacao (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libaccess-bridge-java-jni:
         libaccess-bridge-java-jni depends on libaccess-bridge-java (>= 1.26.2-5); however:
          Package libaccess-bridge-java is not configured yet.
        dpkg: error processing libaccess-bridge-java-jni (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         openjdk-6-jre
         libaccess-bridge-java
         icedtea-6-jre-cacao
         libaccess-bridge-java-jni

    Thanks again for any help.
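
    A hedged recovery sequence using standard dpkg/apt commands; the strace step only observes where the postinst sits if it hangs again (on headless servers a common culprit is Java blocking while reading entropy from /dev/random, though whether that applies here is an assumption):

        # Finish the interrupted run dpkg itself asked for:
        sudo dpkg --configure -a

        # If openjdk-6-jre-headless hangs again, watch what its postinst blocks on:
        sudo strace -f dpkg --configure openjdk-6-jre-headless

        # Once it completes, let apt repair the remaining half-configured packages:
        sudo apt-get -f install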

    Read the article
