Search Results

Search found 20852 results on 835 pages for 'intellij idea'.

  • IIS 7.5 stops serving requests for no apparent reason

    - by Steffen
    We're running a site on 4 virtual Win 2008 R2 64-bit servers. This is the only site on the IIS, and we use Windows Network Load Balancing to share the load between the 4 virtual servers. We've used these virtual servers for approximately a week, and we're starting to see some issues. For no apparent reason IIS stops serving pages, and doesn't even respond with an error: upon requesting a page from the server, the browser just waits indefinitely (or until it decides to give up client-side). Sometimes an iisreset fixes the issue; other times we have to reboot the entire virtual server.

    There are no traces in the event log of why this happens, and no traces in our application's exception log either. Furthermore, this happens even when there's very little load on the server, so it doesn't seem to be flooded with requests. So frankly I'm at a loss here - I have no idea where to start debugging this issue :-( I'm quite certain we never had these issues on our physical servers; however, they were running Win 2003 32-bit, so there are quite a few differences between them and the virtual ones (which obviously makes it difficult to tell what exactly causes this).
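    One way to see what the worker process is doing at hang time - a diagnostic sketch, assuming a default IIS 7.5 install - is to list the requests that are currently executing and how long they have been running:

        rem List worker processes to find the PID of the (possibly stuck) w3wp.exe
        %windir%\system32\inetsrv\appcmd list wp

        rem List requests that have been executing for more than 30 seconds
        %windir%\system32\inetsrv\appcmd list requests /elapsed:30000

    If requests pile up here, Failed Request Tracing with a time-taken trigger can show which module they are stuck in.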

  • Storing changes to multiple databases in a single centralized database

    - by B4x
    The setup: multiple MySQL databases at different locations with the same schema. The databases are in production.

    The motivation: we want to present the information in these databases in a web interface, clearly showing which database each row originated from. We want to be able to get this data from one single source (for different reasons, one of them being pagination, which gets tricky if you use multiple sources).

    The problem: how do we collect data from multiple databases, store it at a central location, and clearly mark the origin of each row? We have discussed using a centralized DB that tracks changes to the production DBs, with the same schema plus one additional column for origin. If possible, we would like to avoid making changes in the production environment. Since we can't use MySQL's replication (multiple masters to a single slave isn't allowed), what are our other options? Are there any existing solutions for something like this, or do we have to code something ourselves? Is the best solution to change the database schemas in production and add a column for origin? The idea of a centralized database isn't set in stone: if there is a solution that solves our other problems without a centralized DB, we can be flexible. Any help is much appreciated.
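    A minimal sketch of what the central schema could look like - table and column names are hypothetical, and the pull step assumes a script (or FEDERATED table) with read access to each production DB:

        -- Central copy of a production table, plus one column marking the origin.
        CREATE TABLE central_rows (
          origin  VARCHAR(32) NOT NULL,  -- which production DB the row came from
          id      INT NOT NULL,          -- the row's primary key in its source DB
          data    TEXT,
          PRIMARY KEY (origin, id)       -- source keys may collide across sites
        );

        -- Periodic pull, run once per site from a cron job:
        -- INSERT INTO central_rows SELECT 'site_a', id, data FROM site_a.rows
        --   ON DUPLICATE KEY UPDATE data = VALUES(data);

    Making origin part of the primary key sidesteps ID collisions between sites without touching the production schemas.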

  • Creating Active Directory on an EC2 box

    - by Chiggins
    So I have Active Directory set up on a Windows Server 2008 Amazon EC2 server. It's set up correctly, I think - I never got any errors with it. Just to test that I got it all set up correctly, I have a Windows 7 Professional virtual machine on my network to join to AD. I set the VM to use the Active Directory box as its DNS server. When I type in my domain to join it, I get the following error:

        DNS was successfully queried for the service location (SRV) resource record used to locate a domain controller for domain "ad.win.chigs.me":
        The query was for the SRV record for _ldap._tcp.dc._msdcs.ad.win.chigs.me
        The following domain controllers were identified by the query:
        ip-0af92ac4.ad.win.chigs.me
        However no domain controllers could be contacted.
        Common causes of this error include:
        - Host (A) or (AAAA) records that map the names of the domain controllers to their IP addresses are missing or contain incorrect addresses.
        - Domain controllers registered in DNS are not connected to the network or are not running.

    It seems that I can talk to Active Directory, but when I try to contact the Domain Controller, it hands out a private IP to connect to - at least that's what I can make of it. Here are some nslookup results:

        > win.chigs.me
        Server:   ec2-184-73-35-150.compute-1.amazonaws.com
        Address:  184.73.35.150

        Non-authoritative answer:
        Name:     ec2-184-73-35-150.compute-1.amazonaws.com
        Address:  10.249.42.196
        Aliases:  win.chigs.me

        > ad.win.chigs.me
        Server:   ec2-184-73-35-150.compute-1.amazonaws.com
        Address:  184.73.35.150

        Name:     ad.win.chigs.me
        Address:  10.249.42.196

    win.chigs.me and ad.win.chigs.me are CNAMEs pointing to my EC2 box. Any idea what I need to do so that I can join my virtual machine to the EC2 Active Directory setup I have? Thanks!
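    The DC registers its internal EC2 address (10.x.x.x) in DNS, which a machine outside EC2 can't reach. One workaround sketch - assuming the required AD ports are also opened in the instance's security group, which is the bigger hurdle; a VPN into the instance is the more common approach - is to override name resolution on the client:

        rem On the Windows 7 VM, from an elevated prompt: map the DC's name to its public IP
        echo 184.73.35.150  ip-0af92ac4.ad.win.chigs.me >> %windir%\System32\drivers\etc\hosts
        ipconfig /flushdns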

  • Configuring CakePHP on Hostgator

    - by yaeger
    I have absolutely no idea what I am doing wrong here. I have followed just about every guide there is for installing CakePHP on shared hosting and I am still having problems, and I have started over each time when following a guide. Maybe someone can help me out here, as I am out of options. Here is my current setup:

        /
          app
            webroot
          vendors
          lib
          cake
          public_html
            .htaccess
            index.php
          plugins

    I have configured the index.php file in public_html to point to the correct files. I have also done this in the index.php file located in the webroot folder. I am getting an Internal 500 server error and it says to check my logs for what the error specifically is. However, there are no logs being generated. I removed the .htaccess file from the public_html folder and I get the following errors:

        Warning: require(/app/webroot/index.php) [function.require]: failed to open stream: No such file or directory in /home/user/public_html/index.php on line 40

        Fatal error: require() [function.require]: Failed opening required '/app/webroot/index.php' (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/user/public_html/index.php on line 40

    Line 40 is:

        require APP_DIR . DS . WEBROOT_DIR . DS . 'index.php';

    where DS = "/" and WEBROOT_DIR = "app". Anyone have any suggestions? I am at a loss as to what I am doing wrong.
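    The failing path /app/webroot/index.php suggests APP_DIR resolves to "/app" rather than the absolute path under the account's home directory, so the require starts at the filesystem root. A sketch of the constants in public_html/index.php - the home directory is taken from the error messages, and for this particular require APP_DIR has to be absolute:

        <?php
        // public_html/index.php - only the path constants shown; paths are assumptions
        define('DS', DIRECTORY_SEPARATOR);
        define('APP_DIR', DS . 'home' . DS . 'user' . DS . 'app');  // absolute, not "/app"
        define('WEBROOT_DIR', 'webroot');

        require APP_DIR . DS . WEBROOT_DIR . DS . 'index.php';

    With those values the require resolves to /home/user/app/webroot/index.php, which is where CakePHP's real front controller lives in this layout.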

  • Puppet inventory service using puppetdb

    - by Oli
    I have 3 servers set up: a puppet master using Passenger (puppet-server1), the dashboard using Passenger (puppet-server2), and puppetdb (puppet-server3). I cannot get the inventory service working in the dashboard. The puppet master is able to sign certs and hand out manifests, the nodes have checked in to the dashboard OK, and puppetdb appears to be working - log files as follows:

        2012-12-13 17:53:10,899 INFO  [command-proc-74] [puppetdb.command] [8490148f-865a-45c8-b5b5-2c8824d753dd] [replace facts] puppet-server3.test.net
        2012-12-13 17:53:11,041 INFO  [command-proc-74] [puppetdb.command] [dfcc5168-06df-41d4-9a97-77b4cd3f4a2b] [replace catalog] puppet-server3.test.net
        2012-12-13 17:55:28,600 INFO  [command-proc-74] [puppetdb.command] [b2cc0a96-0404-49f5-96ad-19c778508d3d] [replace facts] puppet-client2.test.net
        2012-12-13 17:55:28,729 INFO  [command-proc-74] [puppetdb.command] [4dc4b8f3-06df-4dad-a89a-92ac80447b99] [replace catalog] puppet-client2.test.net

    The puppet master has the following configured in puppet.conf:

        [master]
        certname = puppet-server1.test.net
        storeconfigs = true
        storeconfigs_backend = puppetdb
        reports = store, http
        reporturl = http://puppet-server2.test.net/reports/upload

    The puppet master has the following configured in auth.conf:

        # access for puppet dashboard facts
        path /facts
        auth yes
        method find, search
        allow dashboard

    The puppet dashboard has this configured in /usr/share/puppet-dashboard/config/settings.yml:

        # Hostname of the inventory server.
        inventory_server: 'puppet-server3.test.net'
        # Port for the inventory server.
        inventory_port: 8081

    The inventory is on, as I see a link to the inventory in the dashboard server, but I am getting this error:

        Inventory
        Could not retrieve facts from inventory service: SSL_connect SYSCALL returned=5 errno=0 state=SSLv3 read finished A

    Clearly an SSL error - but I have followed the documentation and have no idea how to fix this. Can anyone help please? Oli
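    That error usually means the dashboard is presenting a certificate the inventory endpoint won't accept (or none at all). One thing worth checking - a sketch based on the puppet-dashboard manual's cert rake tasks, assuming the standard package layout - is whether the dashboard has its own signed key pair:

        cd /usr/share/puppet-dashboard
        rake cert:create_key_pair
        rake cert:request
        # then, on the puppet master (CA):  puppet cert sign dashboard
        rake cert:retrieve_signed

    Note that the "allow dashboard" line in auth.conf only matches if the dashboard's certname is actually "dashboard", so the CN of that certificate has to line up with it.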

  • VMware Fusion cannot connect to the NAT connection on my Mac

    - by FFish
    I have been using VMware Fusion on my Mac to check out my websites on localhost. Now I can't connect any more with the NAT connection. There seems to be a problem with my IP address or MAC address? I have no idea what causes this; it was working fine before!

    In the XP (SP2) VM, in the taskbar I see the Local Area Connection with the yellow warning icon. The bubble says: "This connection has limited or no connectivity. You might not be able to access the Internet or some network resources. For more information, click this message." Doing that opens up the Local Area Connection Status panel. In the Support tab, when I click the Repair button I get the following message: "Windows could not finish repairing the problem because the following action cannot be completed: Renewing IP address." I tried disabling my firewall and also XAMPP, which I use as the server on OS X.

    VMware Fusion version: 3.1. VM: XP SP2. Mac OS X 10.6.3. Any help would be greatly appreciated.
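    "Renewing IP address" failing under NAT usually means the VM never reaches Fusion's built-in DHCP server on the vmnet8 network. A first step worth trying - a sketch, with the path as it ships in Fusion 3.x - is restarting Fusion's networking services on the Mac:

        # Restart VMware Fusion's vmnet networking (DHCP and NAT daemons)
        sudo "/Library/Application Support/VMware Fusion/boot.sh" --restart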

  • Is browser based wireless authentication secure?

    - by johnnyb10
    Our wireless network previously used a pre-shared WPA/WPA2 key for guest access, which gives guests access to the Internet. (Our employee access uses 802.1x authentication.) We just had a wireless consultant come in to fix various wireless issues we had; one of the things he wound up doing was changing our guest access to HTML-based instead of the pre-shared key. So now that guest SSID is open (instead of using WPA) and users are presented with a browser-based login screen before they can get on the Internet.

    My question is: is this an acceptable method from a security standpoint? I would assume that an open network is necessarily a bad idea, but the consultant said that the traffic is still using PEAP, so it's secure. I didn't get a chance to question him further on this because we ran late and a bunch of other things came up. Please let me know what you think about the advantages/disadvantages of using HTML-based wireless authentication as opposed to a pre-shared WPA key. Thanks...

  • Can't connect to Synology DiskStation through HTTPS when using Windows 7 Import

    - by LeonidasFett
    A little background to my problem: I have a Synology DiskStation 213j that I use as a backup/data storage solution. When I'm at work, I would like to push and pull files from my DiskStation, but I can't use VPN, which is forbidden for outgoing connections. So I wanted to try HTTPS so I can at least connect securely to the web interface.

    I mostly use Chrome, which uses the Windows certificate store. So I tried importing a self-signed certificate into it, without success: I still get a warning in Chrome telling me the connection is not secure because it can't be verified. When I import the certificate into Firefox, though, it works and I can connect through HTTPS. I checked my domain on this site: http://www.sslshopper.com/ssl-checker.html. It shows no errors, only a warning that the certificate is self-signed, which is OK in this case.

    Anyone got any idea why importing the certificate into Windows 7 doesn't work? I tried right-clicking the domain.mydomain.de.crt file --> Install Certificate --> Next --> both options here (for "Place certificate in the following store:" I selected "Third-Party Root Certification Authorities"), to no avail.
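    For a self-signed server certificate, Windows generally needs it in the Trusted Root Certification Authorities store, not Third-Party Root, before Chrome will trust it. A sketch of the import from an elevated command prompt:

        rem Import the certificate into the machine-wide Trusted Root store
        certutil -addstore -f "Root" domain.mydomain.de.crt

    The certificate's common name (or a subject alternative name) also has to match the hostname you browse to, or Chrome will keep warning regardless of which store it is in.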

  • Running Debian as guest operating system on a Hyper-V VM

    - by kce
    Hello. Layer-9 considerations are prompting a migration from Citrix XenServer to Hyper-V as our shop's virtualization platform of choice. This will require me to migrate our existing virtual machines from XenServer to Hyper-V. A handful of these VMs are running Debian. Unfortunately, Debian does not seem to be on the list of approved/supported guest operating systems. In fact, it seems that running Debian as a guest operating system is rather difficult, although apparently not impossible. I have two interrelated questions:

    Does anyone have any experience running a Debian guest on Hyper-V? Is it one of those things where it just will not work at all, or is it more along the lines of "it will probably work fine, but we won't support it"? Any experience here, positive or negative, would be helpful.

    How much of a bad idea is it to deviate from Hyper-V's list of supported guest operating systems? Again, is it basically asking for Bad Things (TM) to happen, or is it just another instance of "it will probably work fine, but we won't support it"? Or is it somewhere in the middle?

    Thank you.
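    For what it's worth, mainline kernels ship Hyper-V paravirtual drivers as hv_* modules, so on a sufficiently recent Debian kernel the main task is making sure they load at boot - a sketch (module names are from the mainline drivers; whether your kernel includes them depends on its version):

        # Have the Hyper-V drivers available early, then rebuild the initramfs
        printf "hv_vmbus\nhv_storvsc\nhv_netvsc\n" >> /etc/initramfs-tools/modules
        update-initramfs -u

    Without them the VM falls back to emulated IDE and the legacy network adapter, which works but is slow.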

  • Yahoo marked my mail as spam and says domainkey fails

    - by mGreet
    Hi. Yahoo is marking our mail as spam. We are using the PHP Zend Framework to send the mail. The mail header says that the DomainKeys check failed:

        Authentication-Results: mta160.mail.in.yahoo.com
          from=mydomain.com; domainkeys=fail (bad sig);
          from=mydomain.com; dkim=pass (ok)

    We configured our SMTP server (the same server used to send mail from the Zend Framework) in Outlook and sent mail to Yahoo. This time Yahoo says DomainKeys passes:

        Authentication-Results: mta185.mail.in.yahoo.com
          from=speedgreet.com; domainkeys=pass (ok);
          from=speedgreet.com; dkim=pass (ok)

    The DomainKeys signature is added to the mail header on our server, which is used by both the Outlook client and the PHP client. Yahoo accepts the mail sent from Outlook but not the mail from the PHP client. As far as I know, signing the email is done on the server side with the help of the domain key, and PHP and Outlook use the same server to sign the mail. So why is Yahoo handling them differently? What am I missing here? Any idea? Can anyone help me?
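    DomainKeys (unlike DKIM) is fairly brittle: if anything reorders, folds, or rewrites headers after the signature is computed, the signature breaks - and a script-generated message often carries slightly different headers than one composed in Outlook. Since DKIM passes in both cases, the signer itself works; it is worth diffing the raw headers of the two messages and confirming the published record is sane (the selector name below is a guess):

        # What Yahoo sees when it verifies the signature:
        dig +short TXT default._domainkey.mydomain.com
        # The domain-wide DomainKeys policy record:
        dig +short TXT _domainkey.mydomain.com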

  • MGE UPS Cut power - What happened?

    - by JT.WK
    I have 3 x MGE Pulsar M 3000 2700W UPS units within my server room which have run perfectly up until now. On Saturday morning I noticed that one of these UPS units was no longer outputting power; the LCD displayed a message saying "load not powered" and told me to press the power button to start output. Needless to say, the servers, switches and routers it was supporting were all turned off.

    I tried pressing and even holding the power button, but the unit refused to start back up again. Only power cycling the unit got it back up. I have checked the logs on the UPS, although they were useless: nothing out of the ordinary, and no email notifications had been sent. The output level sits at about 51% and all battery checks are OK.

    It is now three days on and the UPS is still up and running (although I am scheduling an outage to get it out of there ASAP). Does anyone have any idea what could have gone wrong here? Is there anything else that I can check that could help?

  • Being a more attractive job candidate - Certs XOR Degree

    - by Zephyr Pellerin
    I'm currently working in an IT position, where I do helpdesk stuff and predominantly security-related issues/consulting (in the loosest sense of the term), in-house and for service-contract clients (as the only/acting CCSP [I guess I should say the only person with Cisco experience] in my organization). I've professionally written kernel-mode drivers for a gaming company, among other things that I'm proud to put on a resume. I think of myself as very reasonably qualified as a system administrator, with excellent Cisco experience, among other things I think would make a good addition to almost any IT staff in need of a new employee.

    However, something has always tripped me up: Human Resources. Let me explain. I decided to skip the university route, and I'm immensely glad that I did. The computer science graduates that I've met and work with rarely know much of anything about computers (until they gain some 'real' experience); even when asked about theoretical computing fundamentals they can rattle something off about Turing completeness, but rarely do they understand the mathematical underpinnings. In short, I thought that instead of going to college, I'd rather pick up some real-world experience.

    Employers, apparently, rarely think the same way. A quick perusal of jobs through the standard job search engines yields nothing short of a conspiracy to exclude anyone without 'a Bachelor's degree in Computer Science or equivalent'. Interviews I've had in the past have almost always been entangled with 1. my age (which I can't really change) and 2. the lack of a degree. Employers frequently disregard the CCNA/CCSP, the experience I've gained through internships, and my extensive experience in x86 assembly and C, among so many other things I like to think are valuable - all because I don't have a piece of paper.

    So, AS AN EMPLOYER: is it even worth working on my CCIE? Or should I pad my resume with certifications that are easier to acquire (like CISSP, MCSE, Network+, etc.)? Or should I ditch the whole idea and head back to get a Mathematics or CS degree?

  • Moving Zend Framework 2 from apache to nginx

    - by Aleksander
    I would like to move a site that uses Zend Framework 2 from Apache to Nginx. The problem is that the site has 6 modules, and Apache handles them via aliases defined in httpd-vhosts.conf:

        # httpd-vhosts.conf
        <VirtualHost _default_:443>
            ServerName localhost:443
            Alias /develop/cpanel     "C:/webapps/develop/mil_catele_cp/public"
            Alias /develop/docs/tech  "C:/webapps/develop/mil_catele_tech_docs/public"
            Alias /develop/docs       "C:/webapps/develop/mil_catele_docs/public"
            Alias /develop/auth       "C:/webapps/develop/mil_catele_auth/public"
            Alias /develop            "C:/webapps/develop/mil_web_dicom_viewer/public"
            DocumentRoot "C:/webapps/mil_catele_homepage"
        </VirtualHost>

    In httpd.conf, DocumentRoot is set to C:/webapps. Sites are available at, for example, localhost/develop/cpanel; the framework handles further routing.

    In Nginx I was able to make only one site available, by specifying root C:/webapps/develop/mil_catele_tech_docs/public; in the server block. It works only because the docs module doesn't depend on auth like the others, and the site was at localhost/. My next attempt:

        root C:/webapps;

        location /develop/auth {
            root C:/webapps/develop/mil_catele_auth/public;
            try_files $uri $uri/ /develop/mil_catele_auth/public/index.php$is_args$args;
        }

    Now when I enter localhost/develop/cpanel it gets to the correct index.php but can't find any resources (css/js files). I have no idea why the reference paths in the browser's GET requests changed to https://localhost/css/bootstrap.css from https://localhost/develop/auth/css/bootstrap.css as it was on Apache. The root directive seems not to be working. Nginx handles PHP using FastCGI:

        location ~ \.(php|phtml)?$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param APPLICATION_ENV production;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    I googled all day and found nothing useful. Can someone help me make this configuration work like it does on Apache?
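    One relevant detail: inside a location, root appends the full URI to the path, so /develop/auth/css/bootstrap.css is looked up under .../public/develop/auth/css/..., whereas Apache's Alias (like nginx's alias) substitutes the matched prefix. An untested sketch for one module - repeat per module; since ZF2 routes everything through its front controller, unmatched paths can be handed straight to that module's index.php:

        location /develop/auth/ {
            alias "C:/webapps/develop/mil_catele_auth/public/";
            try_files $uri @auth;    # serve real files (css/js), else the app
        }

        location @auth {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_param SCRIPT_FILENAME C:/webapps/develop/mil_catele_auth/public/index.php;
            fastcgi_param APPLICATION_ENV production;
            include fastcgi_params;
        }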

  • Which is the fastest way to move 1 petabyte from one storage to a new one?

    - by marc.riera
    First of all, thanks for reading, and sorry for asking something related to my job. I understand that this is something that I should solve by myself, but as you will see it's a bit difficult. A small description:

    Now:

        Storage = 1 PB using DDN S2A9900 storage for the OSTs, 4 OSS, 10 GigE network (Lustre 1.6)
        100 compute nodes with 2x Infiniband
        1 Infiniband switch with 36 ports

    After:

        Storage = previous storage + another 1 PB using DDN S2A990 or LSI E5400 (still to decide) (Lustre 2.0)
        8 OSS, 10 GigE network
        100 compute nodes with 2x Infiniband

    Previous experience: transferred 120 TB in less than 3 days using the following command:

        tar -C /old --record-size 2048 -b 2048 -cf - dir | tar -C /new --record-size 2048 -b 2048 -xvf - 2>&1 | tee /tmp/dir.log

    So, big problem here: using big mathematical equations I conclude that we are going to need 1 month to transfer the data from one side to the other. During this time the researchers will have to step back, and I'm personally not happy with this. I mention that we have Infiniband connections because I think there may be a chance to use them for the transfer, using 18 compute nodes (18 * 2 IB = 36 ports) to move the data from one storage to the other. I'm trying to figure out whether the IB switch will handle all the traffic, but even if it chokes it should still go faster than 10 GigE. Also, having Lustre 1.6 and 2.0 agents on the same server works quite well; with this there is no need to go via 1.8 to upgrade the metadata servers in two steps. Any ideas? Many thanks.

    Note 1: Zoredache, we can divide it in two blocks: (A) 600 TB and (B) 400 TB. The idea is to move (A) to the new storage, which is Lustre 2.0 formatted, then reformat where (A) was with Lustre 2.0, move (B) onto that block, and extend it with the space where (B) was. This way we will end up with (A) and (B) on separate filesystems, with 1 PB each.
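    A sketch of how the tar pipeline could be spread across nodes - assuming both filesystems are mounted on every compute node at /old and /new, and that the top-level directories are numerous enough to balance the load:

        # Round-robin the top-level directories into 18 lists (GNU coreutils split)
        ls /old | split -n r/18 - /tmp/chunk.

        # On node i, process its own list (chunk.aa on node 1, chunk.ab on node 2, ...)
        while read d; do
            tar -C /old --record-size 2048 -b 2048 -cf - "$d" \
              | tar -C /new --record-size 2048 -b 2048 -xf -
        done < /tmp/chunk.aa

    Whether this scales to 18 nodes depends on the OSS side rather than the clients; the four existing OSS will likely become the bottleneck long before the IB switch does.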

  • Windows server 2008R2 routing with single NIC

    - by Fabian
    I'm trying to duplicate a Linux server configuration on a Windows Server 2008 R2 box. Basically this Linux server acts as a router, but it does its job with only one interface (1 NIC). Here is the network configuration in place (I cannot change it):

        INTERNET <== router (local IP = 194.168.0.3) <== Linux server (IP: 194.168.0.2)

    The router is configured with a DMZ to 194.168.0.2, and only allows this IP to connect to the Internet (I cannot change this router configuration). The Linux server is configured with a default gateway of 194.168.0.3, with the option "act as router". All other computers on the LAN have this configuration (given by DHCP):

        IP range:        194.168.0.X
        Mask:            255.255.255.0
        Default gateway: 194.168.0.2

    And everything works perfectly. I'm trying to reproduce this way of routing with only one NIC on Windows Server 2008 R2, but it seems that you cannot do it with only one NIC (all the examples I see refer to 2 NICs on 2 different networks). Does someone have an idea how to achieve this on Windows Server 2008 R2? Thanks for your help! Fabian.
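    Windows can forward packets on a single NIC once IP routing is enabled; the Linux "act as router" option roughly corresponds to this registry value (a sketch - installing the RRAS role is the supported alternative):

        rem Enable IP forwarding; a reboot is needed for it to take effect
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v IPEnableRouter /t REG_DWORD /d 1 /f

    With one NIC, the clients and the upstream router share a subnet, so expect ICMP redirects; the setup still works because 194.168.0.3 only accepts traffic from .2.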

  • Thunderbird 3: can't change column width?

    - by rumtscho
    I recently installed Thunderbird 3.0.3 and just noticed a suboptimal UI setting: in the upper pane, which lists the e-mails in the current folder, the Date column is about 200px wide. So when I keep the window at 480x600, all I see in a row is:

        | tree icon | favourites icon | attachment icon | read icon | junk icon | date and time, followed by 5cm of whitespace | ... | P

    where "P" is the first letter of the sender's name, and the "..." is actually shown this way - I have no idea which column it is meant to be. So I see neither the sender nor the message subject, which makes scrolling a folder for a certain mail rather pointless. I do see them when I maximize the window; in fact the columns are then not only bigger, they are arranged in a different sequence. But I feel that holding a mail client permanently maximized at 1600x1200 is a waste of screen real estate.

    My naive solution attempt was to move the mouse cursor to the right edge of the Date column and try to shrink it by dragging left while holding down the left mouse button. Not only is this the default behaviour for all resizable columns I've ever encountered in GUIs, the cursor actually turns into a horizontal double-headed arrow. But dragging has no effect at all: I cannot make a wide column narrow, and I cannot make the narrow columns wide. I didn't find anything in the preferences either. So can somebody please explain how to get the columns arranged sensibly?

  • Excel Help: Data Input Help

    - by B-Ballerl
    Every day I download data from a site; each row holds an individual client's data. I'm able to input the data into Excel as a whole, but after that I'm having trouble figuring out how to put it into a chart. For example, web visit time: say Client 1 stayed for 5 min, increasing his total time on the site to 20 min, and Client 2 stayed for 0 min, keeping his total at 10 min; both were registered on New Year's Eve, R1's last login was today and R2's was yesterday ("R" for some reason represents "client", no idea why). Client 3 hasn't been on since he registered, keeping his total at 4 min. So my data would look something like this for today (20110104):

        R1,20101231,20110104,20
        R2,20101231,20110103,10
        R3,20101231,20101231,4

    And this for the day before (20110103):

        R1,20101231,20110102,15
        R2,20101231,20110103,10
        R3,20101231,20101231,4

    I get about 200+ client rows each day, and even the names in the client list change. Is it possible to import the data each day and fill in an Excel sheet where the client number sits on the left-hand side of a table, and the amount of time (a whole number, e.g. 4) spent on the site each day extends to the right under that day's date (see picture)? I've managed to create such a sheet manually, but have been unsuccessful at getting Excel to do any of it for me. Here are two pictures:
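    One sketch, assuming each day's download is imported into its own sheet named after the date, with client IDs in column A and the total minutes in column D (sheet and column layout are hypothetical): on a summary sheet with client IDs down column A and dates across row 1, each cell can look up its client in that day's sheet:

        =IFERROR(VLOOKUP($A2, INDIRECT("'" & B$1 & "'!A:D"), 4, FALSE), "")

    Filled across and down, this builds the client-by-date matrix; clients absent from a given day's file simply show blank.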

  • Multiple threads stuck on Tomcat behind Apache mod_proxy

    - by Eddy
    We just took a break from butting our collective heads against this maddening problem. Basically, this brand-new deployment of Tomcat 6.0.36 crawls to a halt every couple of minutes, with many of the worker threads stuck as in the example snippet; only after a while does the server get "unstuck" for another couple of minutes. The previous Tomcat works a charm, but keeping it is not really an option... In netstat, we also see a lot of connections in FIN_WAIT_1 and FIN_WAIT_2.

        "catalina-exec-25" daemon prio=10 tid=0x000000004f9d4000 nid=0x7459 runnable [0x0000000044567000]
           java.lang.Thread.State: RUNNABLE
            at java.net.SocketOutputStream.socketWrite0(Native Method)
            at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
            at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
            at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:756)
            at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:448)
            at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:363)
            at org.apache.coyote.http11.InternalOutputBuffer$OutputStreamOutputBuffer.doWrite(InternalOutputBuffer.java:780)
            at org.apache.coyote.http11.filters.IdentityOutputFilter.doWrite(IdentityOutputFilter.java:118)
            at org.apache.coyote.http11.InternalOutputBuffer.doWrite(InternalOutputBuffer.java:593)
            at org.apache.coyote.Response.doWrite(Response.java:560)
            at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:364)
            at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:448)
            at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:318)
            at org.apache.catalina.connector.OutputBuffer.close(OutputBuffer.java:274)
            at org.apache.catalina.connector.Response.finishResponse(Response.java:493)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:317)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:606)
            at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:396)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            at java.lang.Thread.run(Thread.java:662)

    Any idea? Eddy
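    A thread parked in socketWrite0 is blocking while writing the response to a peer (here Apache's mod_proxy) that has stopped reading, which is consistent with the FIN_WAIT states. One hedged experiment is to stop Tomcat from trusting proxied keep-alive connections, so a dead pooled connection can't pin a worker (attribute names are from the Tomcat 6 connector docs; values are guesses):

        <!-- server.xml: the HTTP connector that mod_proxy talks to -->
        <Connector port="8080" protocol="HTTP/1.1"
                   connectionTimeout="20000"
                   keepAliveTimeout="5000"
                   maxKeepAliveRequests="1"
                   maxThreads="200" />

    If the stalls disappear with keep-alive effectively off, the mismatch lies between httpd's connection pool and Tomcat's timeouts.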

  • Uninstall php5 installed from source

    - by diegomichel
    I tried to install PHP 5 from source, and it worked... Then for some reason I needed to install the official packages, so I tried a make uninstall - and to my surprise there is no such target... So I deleted all the installed files by hand. Then I installed the official Debian packages and they worked fine... until I needed to install the sqlite module, which gives me the following error:

        php --version
        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626/pdo_sqlite.so' - /usr/lib/php5/20090626/pdo_sqlite.so: undefined symbol: php_pdo_register_driver in Unknown on line 0
        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626/sqlite.so' - /usr/lib/php5/20090626/sqlite.so: undefined symbol: php_pdo_register_driver in Unknown on line 0
        PHP 5.3.1-5 with Suhosin-Patch (cli) (built: Feb 22 2010 22:46:05)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2009 Zend Technologies

    So I remembered that manual install I did, and I think there is some old lib installed causing the problem. The bad thing is that there is no make uninstall in the PHP 5 source tree:

        php-5.2.13 > make uninstall
        make: *** No rule to make target `uninstall'.  Stop.

    I have tried reinstalling and purging all PHP-related packages via aptitude, without success. OS: Debian Squeeze.

        uname -a
        Linux desktop 2.6.32-trunk-amd64 #1 SMP Sun Jan 10 22:40:40 UTC 2010 x86_64 GNU/Linux

    Any idea how to fix this?
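    The undefined php_pdo_register_driver symbol suggests those two .so files were built against the source install, not the Debian one. A quick way to check whether they are orphans from the manual build (a sketch; paths are taken from the error messages):

        # Does any Debian package own these files?
        # Orphans report "no path found matching pattern".
        dpkg -S /usr/lib/php5/20090626/pdo_sqlite.so /usr/lib/php5/20090626/sqlite.so

        # If they are orphans, move them aside and reinstall the packaged module
        mkdir -p /root/php-orphans
        mv /usr/lib/php5/20090626/pdo_sqlite.so /usr/lib/php5/20090626/sqlite.so /root/php-orphans/
        aptitude reinstall php5-sqlite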

  • ESXi - change to thin - virtual disk filesize is the same

    - by sven
    Running ESXi 5.5 here, with a datastore on a single SSD. I thought about changing the disks from thick to thin and found a tool on the ESXi host to do that. However, the file size of the newly created virtual disk is not changing. I run:

        vmkfstools -i loader.vmdk -d 'thin' thinloader.vmdk
        Destination disk format: VMFS thin-provisioned
        Cloning disk 'loader.vmdk'...
        Clone: 100% done.

    After that I compared the virtual disk sizes:

        ls -la *.vmdk
        -rw-------  1 root root 32212254720 Jun 10 08:25 loader-flat.vmdk
        -rw-------  1 root root         467 May 21 17:04 loader.vmdk
        -rw-------  1 root root 32212254720 Jun 10 08:27 thinloader-flat.vmdk
        -rw-------  1 root root         520 Jun 10 08:33 thinloader.vmdk

    Stats on the original file:

        stat loader.vmdk
          File: loader.vmdk
          Size: 467        Blocks: 0     IO Block: 131072 regular file
        Device: 8bf64d175e27544ch/10085333178302026828d  Inode: 419443780  Links: 1
        Access: (0600/-rw-------)  Uid: (0/root)  Gid: (0/root)
        Access: 2014-01-25 10:17:34.000000000
        Modify: 2014-05-21 17:04:06.000000000
        Change: 2014-05-21 17:04:06.000000000

    And on the thin file:

        stat thinloader.vmdk
          File: thinloader.vmdk
          Size: 520        Blocks: 0     IO Block: 131072 regular file
        Device: 8bf64d175e27544ch/10085333178302026828d  Inode: 432026692  Links: 1
        Access: (0600/-rw-------)  Uid: (0/root)  Gid: (0/root)
        Access: 2014-06-10 08:27:45.000000000
        Modify: 2014-06-10 08:33:30.000000000
        Change: 2014-06-10 08:33:30.000000000

    Anyone have an idea why the disk is not taking up any less space (tried with multiple VMs already, all the same)? Also, I have noticed that the tool automatically appends "-flat" to the name of the newly created disk... Thanks, Sven

    Update - diff of the vmdk descriptors:

        --- loader.vmdk
        +++ thinloader.vmdk
        @@ -7,15 +7,17 @@
         createType="vmfs"
        -RW 62914560 VMFS "loader-flat.vmdk"
        +RW 62914560 VMFS "thinloader-flat.vmdk"
         ddb.adapterType = "lsilogic"
        +ddb.deletable = "true"
         ddb.geometry.cylinders = "3916"
         ddb.geometry.heads = "255"
         ddb.geometry.sectors = "63"
         ddb.longContentID = "6d95855805dfa0079327dfee29b48dca"
        -ddb.uuid = "60 00 C2 98 d5 7d 17 bf-ac 54 70 b1 2d 39 43 d5"
        +ddb.thinProvisioned = "1"
        +ddb.uuid = "60 00 C2 93 c4 13 6c cf-bb 7b 34 c9 2c b4 dc 1e"
         ddb.virtualHWVersion = "8"
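    On VMFS, ls reports the provisioned size of a -flat.vmdk, which is identical for thick and thin disks; what differs is the number of blocks actually allocated, which du shows (and the descriptor above already says ddb.thinProvisioned = "1", so the clone likely worked):

        # Actual space allocated on the datastore vs. provisioned size
        du -h thinloader-flat.vmdk
        du -h loader-flat.vmdk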

  • How to choose the most optimal RAID settings on PE2950

    - by javano
    I have some Dell PowerEdge 2950s with 4x 15k, 150GB Cheetah SAS drives in them. They are going to be VM hosts running ESXi, with CentOS and Windows Server 2k8 guests; some guests will host IIS servers, and others MSSQL servers. I am trying to choose the RAID virtual disk settings and can't decide which is more optimal given this situation.

    Read policy: out of Read-Ahead, No-Read-Ahead and Adaptive Read-Ahead, the default is Read-Ahead. I will be making large sequential writes initially, writing out blank images for virtual machine hard drives (say, 30 GB from /dev/zero for example), so Read-Ahead seems good at first. But within the virtual machines, reads could be random from anywhere within their file systems, as they are IIS and MSSQL servers, so perhaps No-Read-Ahead is a better idea? Adaptive Read-Ahead then seems like a good compromise, but I don't know much about this option: how does it compare in performance to the others?

    Write policy: write-back caching or write-through caching; the default is write-back. Write-back caching is faster than write-through, but at a safety expense. My thinking here is that in the event of power loss, for example, it seems more likely in my head (this is why I need some clarification!) that damage will occur to a guest VM with write-back caching enabled, so should I favour write-through?

    I have searched around and there is obviously no definitive answer, so I would like to find out what is best for my situation.
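    If the policies end up needing a change after the OS is installed, Dell's OpenManage CLI can flip them without rebuilding the virtual disk - a sketch; the option names should be double-checked against your OMSA version before running:

        # Adaptive read-ahead and write-back on controller 0, vdisk 0
        omconfig storage vdisk action=changepolicy controller=0 vdisk=0 readpolicy=ara
        omconfig storage vdisk action=changepolicy controller=0 vdisk=0 writepolicy=wb

    Write-back is generally considered safe in practice only when the controller has a working battery backup unit, which the PERC in a 2950 typically does.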

  • Why does MOSS sometimes delete an existing user from a site?

    - by Jesse
    I'm experiencing an issue with a MOSS installation. I am using the Site Settings permissions to add an Active Directory account as a valid user of a site. This entails validating that the user account name is correct via the 'Check Names' button, then giving them 'Contribute' permissions. Once this is done they appear as a user on the 'All People' page. This works fine and the user is able to access the site.

    At some point in the future (sometimes several days later) the user account is somehow removed as a valid user from the site. This site resides in a test environment, so access is pretty well controlled, which has allowed us to rule out someone else going in and removing the user manually. This appears to be something the system itself is doing, and we have no idea why. We can manually add the user back, but then it will eventually get removed again.

    I have an admittedly limited understanding of SharePoint permissions, but I believe that SharePoint stores valid users in a SQL database, and I would assume that when dealing with Active Directory accounts it stores the user name and probably the SID. It appears that for some reason this record is later getting deleted from the database, as the users will suddenly disappear from the "All People" page and start getting "Access Denied: You are not authorized..." messages when trying to access the site. Has anyone seen this behavior before?

  • Ways to have audio output without wires

    - by viraptor
    I'm trying to find a way to use my home speakers/amp without physically connecting them. There are two laptops that use them normally (so I don't like changing the connection all the time) and I'd rather move the speakers to a place that's away from the couch. I'm not sure how to do this though... The options I can think of are: (1) some kind of wireless jack-to-jack connection, or (2) finally getting a media server.

    Unfortunately I can't find any good product for the first solution. I've seen some headphones which have the receiver integrated and a separate transmitter, so in general the idea is already out there, just not in the form I need ;) I've also seen http://www.miccus.com/products/blubridge-mini-jack, but I'd have to have a compatible receiver, which I can't find on its own (maybe there's an application the media server could use?).

    As far as a media server goes... many of the plug servers look really interesting, but I'm not sure how to create an audio output and how to redirect the input to it. None of the plug servers I've seen so far advertises an audio output jack, though I think this could be fixed by getting one with a USB port and a cheap USB sound card. I hope the input side can be sorted out in some rather simple way. I've got Linux running on both laptops, so I hope it's possible to configure jack/pulse/whatever to use the remote endpoint, or even write a simple local-/dev/dsp:network:media-server-/dev/dsp forwarder.

    So the main question is... are there better ways? Are there any out-of-the-box solutions? Or maybe this was already done by someone and described somewhere?
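    Since both laptops run Linux, PulseAudio can already do the "remote endpoint" part over the network - a sketch, with the server IP and subnet as placeholders:

        # On the media server: accept audio from machines on the LAN
        pactl load-module module-native-protocol-tcp auth-ip-acl=192.168.1.0/24

        # On each laptop: add a sink that tunnels to the server's sound card
        pactl load-module module-tunnel-sink server=192.168.1.10

    After that, the tunnel shows up as a selectable output device in each laptop's sound settings.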

  • Per connection bandwidth limit

    - by Kyr
    Apparently, our server box running Windows Server 2008 R2 has a per-connection bandwidth limit of 0.2 MB/s. Meaning, while one TCP connection can pull at most 0.2 MB/s, 60 parallel connections can pull 12 MB/s. We first noticed this when trying to check out a large SVN repository from this server. I used a simple Java application to test this, transferring data from server to workstation using a variable number of threads (one connection per thread). The server part of the application simply writes a 1 MB memory buffer to the socket 100 times, so there is no disk involvement. Each connection topped out at 0.2 MB/s, whether there was only one connection or 60 in parallel.

    The problem is that I have no idea where this limit comes from. I have very little experience administering Windows Server, so I was mostly trying to find something by googling. I have checked the following:

    Local Computer Policy > QoS Packet Scheduler > Limit reservable bandwidth: it's Not Configured;

    Group Policy Management Console: we have two GPOs, but neither has any policy-based QoS defined;

    There isn't any bandwidth-limiter program installed, as far as I can tell, and we're using the standard Windows Firewall. I can update this question with any additional information if needed.
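    A fixed per-connection ceiling that scales linearly with connection count is the signature of a small TCP receive window (throughput is roughly window / RTT) rather than a traffic shaper. Worth checking on both ends - a sketch:

        rem Is receive-window auto-tuning disabled or restricted?
        netsh interface tcp show global

        rem If so, re-enabling it is a candidate fix:
        netsh interface tcp set global autotuninglevel=normal

    At 0.2 MB/s, a classic 64 KB window would correspond to an RTT around 300 ms; if your actual RTT is far lower, something in between (a device clamping the window, for instance) is worth suspecting.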

  • My Boot order changed. Why?

    - by Chris
    I have a laptop running Windows XP SP3 with one internal hard drive, partitioned into C: (system) and D: (storage), and an external hard drive, F:. Yesterday the machine was running fine. Today I came back to it and saw that it was just showing a blinking cursor. I checked through the BIOS and the hard drive checked out fine. I CTRL-ALT-DELETED the machine a few times, but was never able to boot back into the operating system. I threw in a live CD and found that the drive order has changed: the external drive is now C:, the system partition is D:, and the storage partition is E:.

    Does anyone have any idea how or why this occurred? Automatic system updates are turned off, so there should have been no automatic reboot of the system overnight, and the anti-virus running on the machine had found no infections before this occurred.

    Edit: When I was looking through the BIOS, I did see that the boot order had changed. But the same question remains: what would have caused this to happen? I can't believe that a random reboot happened and totally changed my hard drive setup.
