Search Results

Search found 25440 results on 1018 pages for 'agent based modeling'.


  • Bacula vs. BackupPC [closed]

    - by ujjain
    I have been googling about the differences between them: Bacula has lots of roles; BackupPC is easier to configure; Bacula works with an agent rather than rsync (great for Windows backups). It seems that Bacula is most often compared to Amanda, though, while BackupPC seems a perfectly lovely and popular backup distribution too. I currently back up my servers with rsnapshot, but I am looking for a professional, scalable solution that could also back up 50 hosts without problems. Preferably a solution that can offer bare-metal restores for my Linux servers. I am not looking to reinstall the exact same version of Plesk, the software, etc... Update: I see this ranks high in Google; I found a good article: http://www.serverfocus.org/backuppc-vs-bacula-vs-amanda. I personally think that BackupPC is good for smaller environments, but Bacula, despite the high learning curve, is better for environments that require scaling.

    Read the article

  • Can I use wildcards in puppet package ensure to cover multiple release versions?

    - by Rob van den Eijnde
    Using puppet I want to update packages on my CentOS 5 and 6 servers in a controlled way. Therefore I don't want to use ensure => latest but rather ensure => 3.0.1-1. Example:

        class puppet::installation inherits puppet {
          package { "puppet":
            ensure => "3.0.1-1",
          }
        }

    The update works alright, but the puppet agent keeps complaining that there is a difference: /Stage[main]/Puppet::Installation/Package[puppet]/ensure: current_value 3.0.1-1.el6, should be 3.0.1-1 (noop). I can solve this by changing the ensure rule to 3.0.1-1.el6, but then that won't work on CentOS 5. Is there a short/clean way to solve this, or do I have to write two separate, OS-release-dependent rules? I have been googling for a solution but didn't find anything pertaining to this particular question. Any suggestion or reference to a relevant example would be appreciated.
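    One possible approach (a minimal editorial sketch, not from the original question) is to derive the distribution suffix from a fact and interpolate it into the version string. The fact name is an assumption: depending on the Facter version, $::operatingsystemmajrelease or the older $::lsbmajdistrelease is the one available.

        # Hypothetical sketch: pick the release suffix per CentOS major version
        class puppet::installation inherits puppet {
          $dist = $::operatingsystemmajrelease ? {
            '5'     => 'el5',
            '6'     => 'el6',
            default => 'el6',
          }
          package { 'puppet':
            ensure => "3.0.1-1.${dist}",  # becomes 3.0.1-1.el6 on CentOS 6
          }
        }

    This keeps a single rule while matching the exact value the package provider reports back.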

    Read the article

  • Upgrade TFS 2010 build server to support .NET 4.5

    - by JustEngland
    What is needed in the TFS 2010 build agent to build .NET 4.5 projects? In TFS 2008 we had to set the MSBuildPath property, but the configuration seems to be different in 2010. I get the following error message:

        (614): The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v11.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.

    How we handled it in 2008: http://blogs.msdn.com/b/willbar/archive/2009/11/01/building-net-4-0-applications-using-team-build-2008.aspx

    Read the article

  • Apache stops responding to http requests -- https continues to work

    - by Apropos
    Okay, very strange problem that I'm having here. I just recently updated to Apache 2.4.2 from 2.2.17, mostly to try to get name-based SSL VirtualHosts working (although they should have been working on 2.2.17). The server is Win2008 R2 (so x64 by definition) running with PHP 5.4.3 and MySQL 5.1.40 (outdated, I know). When I launch the server, it initially works fine: it responds to all requests, VirtualHosts all in order. However, after an uncertain amount of time (it appears to take only a few minutes for the most part, but sometimes takes hours), it stops responding to regular HTTP requests (on any VirtualHost). HTTPS continues to work. No errors in the log, and nothing in the access logs when I attempt to connect. I'm having a hard time finding the source of this error given its intermittent nature. When I removed all SSL-based VirtualHosts, stability seemingly increased (still responding to HTTP requests twelve hours later); this could be mere coincidence, though. The entirety of the SSL VirtualHost is as follows, should there happen to be a problem with it:

        <VirtualHost *:443>
            DocumentRoot "C:\Server\www\virtualhosts\mysite.net"
            ErrorLog logs/ssl.mysite.net-error_log
            CustomLog logs/ssl.mysite.net-access_log common env=!dontlog
            SSLEngine on
            SSLProtocol all -SSLv2
            SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM
            SSLCertificateFile C:/Server/bin/apache/apache2.4.2/conf/ssl/server.crt
            SSLCertificateKeyFile C:/Server/bin/apache/apache2.4.2/conf/ssl/server.key
            SSLCertificateChainFile C:/Server/bin/apache/Apache2.4.2/conf/ssl/sub.class1.server.ca.pem
            SSLCACertificateFile C:/Server/bin/apache/Apache2.4.2/conf/ssl/ca.pem
        </VirtualHost>

    Any ideas what I'm missing?
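    Editorial note: early Apache 2.4.x builds on Windows had known hangs on plain-HTTP connections related to accept-filter handling, and a commonly suggested workaround (an assumption that it applies to this case, not something confirmed in the question) is to disable those optimizations in httpd.conf:

        # Possible workaround for 2.4.x accept-filter hangs on Win32
        AcceptFilter http none
        AcceptFilter https none
        EnableSendfile Off
        EnableMMAP Off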

    Read the article

  • Fixed Resource Monitor Graph Scale in Windows Server 2008 R2

    - by Clever Human
    In Windows Server 2008 R2's Resource Monitor, is there a way to set the scale of the various graphs to constant values instead of having them vary based on the data? It seems to me that the utility of a graph is to give a quick, at-a-glance overview of the values it is showing. So if I look at the CPU graph and the line is up near the top, I know immediately that something is using all my CPU and can go investigate what. I don't really care if the CPU is jumping between .01% and 2%. Or if the network usage monitor is up near the top, I will know that all my bandwidth is being used up, and go figure out what. But the way things are now, the graphs are meaningless because the scales constantly shift. If you look at the network usage graph one second it might have a scale topping out at 100 Kbps, and the next second a scale based on 1 Mbps! So... is there a registry key or something that will peg the scale of these graphs to logical maximums?

    Read the article

  • Cannot get Backup Exec to back up Exchange

    - by Shawn Gradwell
    I have a media server, Windows Server 2008 SP2, running Backup Exec 2010 R2. The SQL and other Windows agents work, but I cannot back up the Exchange 2010 server running Windows Server 2008 R2. I have the correct license for the Exchange agent installed on the media server, and I installed the Exchange Management Tools on the media server. The 'Microsoft Exchange Database Availability Group' option is greyed out, and if I select the server under a new backup job I can expand the 'Microsoft Information Store' option and see the mail database name, but it shows 0 KB. When I try to back it up, it fails with this error:

        The job failed with the following error: Backup Exec attempted to back up an Exchange database according to the job settings. The database was not found, however. Update the selection list and run the job again.

    Read the article

  • How to diagnose a Remote Assistance problem

    - by cantabilesoftware
    I have a long-standing issue with Remote Assistance between a home and work PC. My wife and I both use MSN Messenger, and I used to be able to control her PC at home via MSN Remote Assistance. Some time ago, however, this stopped working and I don't know why. We're both running the latest versions of MSN Live Messenger and I've checked that the appropriate firewall ports are open, but it still doesn't work and MSN just says something useless like "The person isn't responding". Any suggestions for how I can diagnose this? More info: I just tried direct Remote Desktop between the work PC and home PC and it works fine - so I presume all the appropriate ports are open. Just Remote Assistance doesn't work. I'd like to get RA working so I can demonstrate how to do things remotely: with Remote Desktop the person at the other end gets booted off and can't see, while with Remote Assistance they can follow along step by step. Some comments below suggest using other solutions, which is fine and they do work, but there must be a way to diagnose RA and get it working. Experimenting with this some more: the notebook that I was using at work today, which refused to connect, works fine for Remote Assistance when I bring it home. So I guess this must be a problem with our network configuration at work. I've checked that 3389 is open on the firewall on the office router, and Remote Desktop works both ways... just not Remote Assistance. I've read that Remote Assistance won't work if client and server are both behind non-UPnP/NAT routers; if one has UPnP it's supposed to work. The office router doesn't have UPnP enabled but my home one does. I've also scoured the event logs on both ends - nothing noteworthy (unless I'm looking in the wrong spot). Note (copied from comment): I've just tried ShowMyPC, which is based on VNC, and it works, but I'd still like to figure out what's wrong with RA - it's just bugging me. [Edit by Gnoupi: The question is only about Remote Assistance; there is no need to propose solutions based on other programs.]

    Read the article

  • Getting a Dell E6320 with i7 to work with 3 monitors at 1920x1080 each

    - by MadBoy
    I want to buy a Dell E6320, which comes with an Intel Core i7-2620M (2.70GHz, 4MB cache, dual core) and Intel HD Graphics 3000. The laptop will come with a docking station, and I want to connect 3 monitors to that docking station so that working at home gives me an additional boost. The docking station will only allow me to connect 2 monitors, so I'm looking at the following other options:

    1. A Matrox TRIPLEHEAD2GO DIGITAL Edition or TRIPLEHEAD2GO DP Edition. But reading the Matrox support page, the Intel GPU can't run the highest resolution with 3 monitors connected; it gets even worse, since it seems the monitors would have to be able to work at 50Hz. Also, I'm not sure, but it seems that Matrox doesn't present the monitors as 3 separate displays but simply as one big space (which is a bit the opposite of what I need).
    2. Buy 2, or maybe just 1, USB-based monitor. But that would also mean having 1 or 2 monitors different from the main one, unless I buy 3 USB-based monitors, which would mean more money to spend. Also, I found only a couple of models and most of them require USB 3.0 and no other cables to plug in (nice but costly - I couldn't find a decent monitor that takes only USB for the signal and has power connected normally). The docking station has only one USB 3.0 port; can I use a hub and still get it to work?
    3. Find some converters from digital to USB (I think DisplayLink makes some?).
    4. Buy a different laptop - but what kind? I need it to be an i7, small (13"), fast and lightweight, and at the same time it needs a docking station that I can use at home to connect 3 external monitors.
    5. Some other suggested solution...

    Edit: I need the 3 monitors for work, in terms of coding in Visual Studio or having Word/Excel/Outlook open. Nothing fancy. Maybe a movie once in a while.

    Read the article

  • Ubuntu 12.04 suddenly cannot connect to a WPA2/WPA Personal protected connection; Windows 7 can

    - by d4ryl3
    I have a laptop with Windows 7 and Ubuntu 12.04, and a Cisco E1200 which, when I set it up, created 2 SSIDs. Let's name them MyConnection (WPA/WPA2 Personal) and MyConnection-Guest (no authentication; the guest password is entered via a web browser). I had no problem connecting to MyConnection before, in either Windows 7 or Ubuntu. But now I can't access MyConnection on Ubuntu: it just says "connecting..." then disconnects after a while. I am, however, able to access the internet (on Ubuntu) when I connect to MyConnection-Guest. MAC filtering is off (and even when it's on, the MAC address is in the white list). Any idea why I'm unable to connect to MyConnection in Ubuntu? Thanks. Update: My Ubuntu installation can connect to ANY WiFi connection (WPA/WEP/no auth) except for MyConnection. Update 2: This is what "the not so easy way" returned:

        Initializing interface 'eth1' conf '/etc/wpa_supplicant.conf' driver 'default' ctrl_interface 'N/A' bridge 'N/A'
        Configuration file '/etc/wpa_supplicant.conf' -> '/etc/wpa_supplicant.conf'
        Reading configuration file '/etc/wpa_supplicant.conf'
        Priority group 0
           id=0 ssid='MyConnection'
           id=1 ssid='MyConnection'
           id=2 ssid='MyConnection'
           id=3 ssid='MyConnection'
        WEXT: cfg80211-based driver detected
        SIOCGIWRANGE: WE(compiled)=22 WE(source)=21 enc_capa=0xf
        capabilities: key_mgmt 0xf enc 0xf flags 0x0
        netlink: Operstate: linkmode=1, operstate=5
        Own MAC address: xx:xx:xx:xx:xx:xx
        wpa_driver_wext_set_key: alg=0 key_idx=0 set_tx=0 seq_len=0 key_len=0
        wpa_driver_wext_set_key: alg=0 key_idx=1 set_tx=0 seq_len=0 key_len=0
        wpa_driver_wext_set_key: alg=0 key_idx=2 set_tx=0 seq_len=0 key_len=0
        wpa_driver_wext_set_key: alg=0 key_idx=3 set_tx=0 seq_len=0 key_len=0
        wpa_driver_wext_set_key: alg=0 key_idx=4 set_tx=0 seq_len=0 key_len=0
        ioctl[SIOCSIWENCODEEXT]: Invalid argument
        Driver did not support SIOCSIWENCODEEXT
        wpa_driver_wext_set_key: alg=0 key_idx=5 set_tx=0 seq_len=0 key_len=0
        ioctl[SIOCSIWENCODEEXT]: Invalid argument
        Driver did not support SIOCSIWENCODEEXT
        wpa_driver_wext_set_countermeasures
        RSN: flushing PMKID list in the driver
        Setting scan request: 0 sec 100000 usec
        WPS: UUID based on MAC address - hexdump(len=16): 16 3b d8 47 9e 24 50 89 96 16 6d 66 35 f3 58 37
        EAPOL: SUPP_PAE entering state DISCONNECTED
        EAPOL: Supplicant port status: Unauthorized
        EAPOL: KEY_RX entering state NO_KEY_RECEIVE
        EAPOL: SUPP_BE entering state INITIALIZE
        EAP: EAP entering state DISABLED
        EAPOL: Supplicant port status: Unauthorized
        EAPOL: Supplicant port status: Unauthorized
        Added interface eth1
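    For reference when testing outside NetworkManager (an editorial sketch with a placeholder passphrase, not the poster's actual file), a minimal /etc/wpa_supplicant.conf network block for a WPA2-PSK SSID looks like this; the debug output above shows four such blocks were parsed:

        network={
            ssid="MyConnection"
            key_mgmt=WPA-PSK
            psk="passphrase-goes-here"
        }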

    Read the article

  • Puppet apache module causing 'Error 400 on SERVER: Invalid parameter identifier'

    - by Andy Shinn
    I am receiving the following error when trying to use the latest puppetlabs-apache module from GitHub (https://github.com/puppetlabs/puppetlabs-apache):

        Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter identifier at /etc/puppet/environments/apache_update/modules/apache/manifests/mod.pp:40 on node zordon.mydomain.com
        Warning: Not using cache on failed catalog
        Error: Could not retrieve catalog; skipping run

    My node config looks like:

        node 'zordon.mydomain.com' {
          include template::common
          include template::puppetagent
          include template::lamp
          User::Create <| |>
          sudo::conf { 'joe':
            priority => 60,
            content  => 'joe ALL=(ALL) NOPASSWD: ALL',
            require  => User::Create['joe'],
          }
        }

    The template::lamp class is what uses the apache module:

        class template::lamp {
          include myfirewall
          Firewall <| |>
          class { 'apache': }
          class { 'apache::mod::php': }
          class { 'apache::mod::ssl': }
          class { 'mysql::server': }
        }

    (Server Fault markup garbled the Puppet realize statements; the User::Create and Firewall lines are just realizing a user and 2 firewall rules, so the collector syntax above is a reconstruction.) I have verified that the /var/lib/puppet/lib/puppet/type/a2mod.rb type has the identifier parameter and that it has the same MD5 as on the server. I am using Puppet 3.0.1 on both agent and master. Any idea what may cause this?
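    One editorial possibility worth ruling out (an assumption, not from the post): in Puppet 3 the master can load a custom type like a2mod from whichever modulepath it searches first, so an older copy of the apache module in another environment (e.g. production) can shadow the updated one in apache_update. Two quick checks, sketched:

        # on the master: look for stale a2mod.rb copies that may shadow the new one
        find /etc/puppet -name a2mod.rb

        # on the agent: run explicitly against the environment with the updated module
        puppet agent --test --environment apache_update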

    Read the article

  • SQL 2008 publisher -> SQL 2000 subscriber: Is a pull subscription possible for merge replication?

    - by Brian Dunzweiler
    I am trying to synchronize a SQL 2000 SP4 subscriber to a SQL 2008 publisher via a merge pull subscription. When the subscriber tries to run the merge agent, it fails with the following error:

        The process could not connect to Distributor 'OH05DBS002\SAM_SSG_2008'. SQL Server does not exist or access denied.

    Has anyone had success with this setup? I was able to create and synchronize a push subscription, so I know that communication works between the two, at least from 2008 to 2000. The lack of communication from 2000 to 2008 also affects the ability to create a linked server on the SQL 2000 subscriber. One other tidbit: I did install the SQL 2008 native client on the 2000 box, but it didn't help either. Before anyone asks, I can't upgrade the subscriber, as it still needs to support replication with MS Access 2003. Yeah, I know. :) TIA, Brian

    Read the article

  • What is the optimum way to secure a company-wide wiki?

    - by Mark Robinson
    We have a wiki which is used by over half our company. Generally it has been very positively received. However, there is a concern over security - not letting confidential information fall into the wrong hands (i.e. competitors). The default answer is to create a complicated security matrix defining who can read which document (wiki page) based on who created it. Personally I think this mainly solves the wrong problem, because it creates barriers within the company instead of a barrier to the external world. But some are concerned that people at a customer site might share information with the customer, which then goes to the competitor. The administration of such a matrix is a nightmare because (1) the matrix is based on departments and not projects (this is a matrix organisation), and (2) in a wiki all pages are by definition dynamic, so what is confidential today might not be confidential tomorrow (but the history is always readable!). Apart from the security matrix, we've considered restricting content on the wiki to non-super-secret stuff, but of course that needs to be monitored. Another solution (the current one) is to monitor views and report anything suspicious (e.g. one person at a customer site having 2000 views in two days was reported). Again, this is not ideal, because heavy viewing does not directly imply a wrong motive. Does anyone have a better solution? How can a company-wide wiki be made secure and yet keep its low-threshold USP? BTW, we use MediaWiki with Lockdown to exclude some administrative staff.

    Read the article

  • Symantec CPS / Backup Exec 11D Service stuck in "Starting" Status

    - by user42289
    I have two Windows 2003 servers (one SE, one SBS), both SP2, and both Virtual Machines on Microsoft Virtual Server 2005 R2. All of a sudden, about 2 weeks ago, Symantec Backup Exec / CPS 11D stopped working on them. One is the media server; one is our Exchange 2003 server. There is another copy of CPS on our file server whose service is running fine; however, the one that is fine is not a VM. When I say stopped working, I mean the "Backup Exec Continuous Protection Agent" service is stuck in "Starting" status. On the non-Exchange server I've tried uninstalling the last Windows Updates, which were installed around the time of the failure. I've tried repairing the CPS install. I've tried uninstalling and reinstalling it. Exact same problem in the end.

    Read the article

  • How to find the source of a 301/302 redirect loop? Heroku, GoDaddy, Zerigo

    - by user179288
    This should be a relatively simple problem, but I'm having trouble. I hope this is the right forum to post on, as I've seen people get booted off Stack Overflow for this sort of thing. I've set up a web app on Heroku (cedar stack) at my-web-app.herokuapp.com and I'm trying to direct my-domain.com and www.my-domain.com to it. As per the instructions in the Heroku documentation, I've set my-domain.com to redirect (forwarding) to www.my-domain.com and then set a CNAME from www.my-domain.com to my-web-app.herokuapp.com. But the CNAME doesn't seem to be working right and requests are sent back to my-domain.com, causing a loop, and I can't work out why. I first configured these settings at GoDaddy.com, where I registered the domain, but then tried to avoid the problem by using Heroku's Zerigo DNS add-on, setting the nameservers on GoDaddy to the ones given for Zerigo. However, the problem remains. Here is the output from dig for my-domain.com ("drop-circles.com"):

        ; <<>> DiG 9.3.2 <<>> any drop-circles.com
        ;; global options: printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 671
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 0, ADDITIONAL: 5

        ;; QUESTION SECTION:
        ;drop-circles.com. IN ANY

        ;; ANSWER SECTION:
        drop-circles.com. 433 IN NS b.ns.zerigo.net.
        drop-circles.com. 433 IN NS d.ns.zerigo.net.
        drop-circles.com. 433 IN NS e.ns.zerigo.net.
        drop-circles.com. 433 IN NS a.ns.zerigo.net.
        drop-circles.com. 433 IN NS c.ns.zerigo.net.
        drop-circles.com. 433 IN SOA a.ns.zerigo.net. hostmaster.zerigo.com. 1372250760 10800 3600 604800 900
        drop-circles.com. 433 IN A 64.27.57.29
        drop-circles.com. 433 IN A 64.27.57.24

        ;; ADDITIONAL SECTION:
        d.ns.zerigo.net. 68935 IN A 174.36.24.250
        e.ns.zerigo.net. 69015 IN A 72.26.219.150
        a.ns.zerigo.net. 72602 IN A 64.27.57.11
        c.ns.zerigo.net. 69204 IN A 109.74.192.232
        b.ns.zerigo.net. 70549 IN A 174.37.229.229

        ;; Query time: 15 msec
        ;; SERVER: 194.168.4.100#53(194.168.4.100)
        ;; WHEN: Wed Jun 26 14:29:07 2013
        ;; MSG SIZE rcvd: 293

    Here is the output from dig for www.my-domain.com ("www.drop-circles.com"):

        ; <<>> DiG 9.3.2 <<>> any www.drop-circles.com
        ;; global options: printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1608
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;www.drop-circles.com. IN ANY

        ;; ANSWER SECTION:
        www.drop-circles.com. 407 IN CNAME drop-circles-website.herokuapp.com.

        ;; Query time: 19 msec
        ;; SERVER: 194.168.4.100#53(194.168.4.100)
        ;; WHEN: Wed Jun 26 14:29:15 2013
        ;; MSG SIZE rcvd: 83

    And from Fiddler, if I use the inspector when I try either address I get a series of requests. For my-domain.com ("drop-circles.com") it looks like this:

        Request:
        GET http://drop-circles.com/ HTTP/1.1
        Accept: text/html, application/xhtml+xml, */*
        Accept-Language: en-gb
        User-Agent: Opera/9.80 (Windows NT 5.1; U; Edition IBIS; Trident/5.0)
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: drop-circles.com

        Response:
        HTTP/1.1 302 Found
        Server: nginx/0.8.54
        Date: Wed, 26 Jun 2013 13:26:55 GMT
        Content-Type: text/html;charset=utf-8
        Connection: keep-alive
        Status: 302 Found
        Location: http://www.drop-circles.com/
        Content-Length: 113

        <html><body>Redirecting to <a href="http://www.drop-circles.com/">http://www.drop-circles.com/</a></body></html>

    And www.my-domain.com ("www.drop-circles.com") looks like this:

        Request:
        GET http://www.drop-circles.com/ HTTP/1.1
        Accept: text/html, application/xhtml+xml, */*
        Accept-Language: en-gb
        User-Agent: Opera/9.80 (Windows NT 5.1; U; Edition IBIS; Trident/5.0)
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: www.drop-circles.com

        Response:
        HTTP/1.1 301 Moved Permanently
        Content-Type: text/html
        Date: Wed, 26 Jun 2013 13:26:56 GMT
        Location: http://drop-circles.com/
        Vary: Accept
        X-Powered-By: Express
        Content-Length: 104
        Connection: keep-alive

        <p>Moved Permanently. Redirecting to <a href="http://drop-circles.com/">http://drop-circles.com/</a></p>

    Any and all help would be greatly appreciated. If it is not at all obvious from these readouts what the problem might be, could someone at least tell me which company (GoDaddy, Zerigo or Heroku) I should go to for support, since I don't really know enough to be able to say where the problem lies. Thank you.

    Read the article

  • SSH attack CentOS Amazon EC2

    - by user37143
    Hi, I run a few RightScale CentOS AMI based instances on Amazon EC2. Two months back I found that our sshd security was compromised (I had added hosts.allow and hosts.deny for ssh). So I created new instances, set up IP-based ssh access that allows only our IPs through the AWS firewall (ec2-authorize), and changed the default ssh port 22 to some other port. But two days back I found I was not able to log in to the server; when I tried on port 22, ssh connected, and I found that sshd_config had been changed. When I tried to edit sshd_config I found root had no write permission on the file. So I tried a chmod, and it said access denied for the 'root' user. This is very strange. I checked the secure log and history and found nothing informative. I have PHP, Ruby on Rails, Java and Wordpress apps running on these servers. This time I did a chkrootkit scan and found nothing. I renamed the /etc/ssh folder and reinstalled openssh through yum. I have faced this on 3 instances on CentOS (5.2, 5.4); I have instances on Debian as well, and those work fine. Is this a CentOS/RightScale issue? Guys, what security measures should I take to prevent this? Please help; this is very critical. Thanks
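    Editorial note, since the symptom is distinctive: on Linux, root being denied a chmod usually means the file carries the immutable attribute, something intruders commonly set to lock in a modified sshd_config. A quick check, sketched:

        lsattr /etc/ssh/sshd_config     # an 'i' in the flags means immutable
        chattr -i /etc/ssh/sshd_config  # clears it (do forensics first)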

    Read the article

  • Converting node inheritance to hiera

    - by quickshiftin
    I'm working on moving a node inheritance tree over to hiera, and am presently working on the hierarchy. Prior to hiera, my nodes had a hierarchy like this:

        base
          pre-prod
            qa nodes
            staging nodes
            development nodes
          prod nodes

    Now I'm trying to get the same tiers with hiera. Starting out I have this:

        :hierarchy:
          - base
          - "%{environment}"
          - "%{clientcert}"

    but I need another level to capture pre-prod and prod. My thought would be to add an entry to puppet.conf, something like:

        [agent]
        realm = pre-prod

    then:

        :hierarchy:
          - base
          - "%{realm}"
          - "%{environment}"
          - "%{clientcert}"

    A couple of questions: Are you allowed to place arbitrary properties into puppet.conf? Will hiera see the realm property?
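    Editorial sketch (not from the original post): arbitrary settings in puppet.conf are generally not the supported route; the usual alternative is to expose the realm as a custom fact, which hiera can interpolate directly. For example, as an external fact (the file name and hierarchy below are assumptions):

        # /etc/facter/facts.d/realm.txt on the node
        # (external facts need Facter 1.7+, or the stdlib module on older versions)
        realm=pre-prod

        # hiera.yaml
        :hierarchy:
          - base
          - "%{::realm}"
          - "%{::environment}"
          - "%{::clientcert}"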

    Read the article

  • My server appears to have been hacked: is scanssh run by Zabbix normal?

    - by Niro
    I'm running a few EC2/Scalr instances with Zabbix monitoring. I received complaints about one of my servers port-scanning other servers; the logs show it accessing port 22 on consecutive IP addresses. I looked at the process list and saw scanssh running under the user zabbix. My questions are: Is scanssh part of Zabbix? Is it supposed to run? I have active autodiscovery on Zabbix, but it is looking at other IP addresses and definitely not port 20. Is it possible that something in the zabbix agent config is controlling it, rather than the settings on the zabbix server? What can I do to find out whether Zabbix is somehow misbehaving or it is a hacker? Any advice is highly appreciated.

    Read the article

  • I cannot access Windows Update at all

    - by Cardinal fang
    I have been unable to access the Windows Update site for a couple of weeks now. I just get a message saying "Internet Explorer cannot display the webpage" and telling me I have connection problems. The same thing happens with any other Microsoft site I try to access, and Automatic Updates also do not work, yet I can access every other website I've surfed to. I've tried Googling the problem, and based on what other sites have suggested I have cleared my cache and temp files. I've scanned my hard drive with my antivirus in case I have a virus (nada). I've tried turning off my firewall and antivirus (I run ZoneAlarm). I've downloaded SpyBot and scanned my drive with that in case something was missed by ZoneAlarm (again nada). Based on suggestions from the smart cookies on the Bad Science forum, I've used nslookup to check that my name resolution isn't wonky (I got all the info they said I should get). I've also tried navigating there directly using the IP address I was given (nope). I normally access the internet through a 3 mobile broadband connection, but have also tried connecting using a mate's wi-fi connection in case it was something on my mobile modem interfering. I run Windows XP SP3 with Internet Explorer 7 and ZoneAlarm Internet Security Suite as my antivirus/firewall. Any suggestions?

    Read the article

  • Log monitoring using Zabbix

    - by Supratik
    I am trying to monitor a log file using Zabbix 1.8.4. I created an item with the following details:

        Host: Zabbix server
        Description: logger_test
        Type: Zabbix agent (active)
        Key: log[/tmp/scribetest/test3/test3_current,error,,100]
        Type of information: Log
        Update interval (in sec): 1
        Keep history (in days): 90
        Status: Active
        Applications: Log files

    I created a trigger and attached it to the logger_test item with the following details:

        Name: logger_test_trigger
        Expression: {Zabbix server:log[/tmp/scribetest/test3/test3_current,error,,100].str(error)}=1
        Severity: disaster

    The above settings work fine the first time, but the next time the trigger shows ZBX_NOTSUPPORTED, and after that the item also shows a "not supported" message. Can you please tell me if I am doing anything wrong here?

    Read the article

  • Move the uploads folder in WordPress

    - by Victor Hurdugaci
    Currently, my WordPress uploads folder is located in \wp-content\uploads. Initially there was no structure, so all files were put directly there. After a while the setting was changed so that files are uploaded to \wp-content\uploads\YEAR\MONTH. Now that folder contains a mix of files (those starting with + are folders):

        +wp-content
        | +2010
        | | +02
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +01
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +2009
        | | +12
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +11
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +..
        | | | ..
        | Unstructured-file-1
        | Unstructured-file-2
        | ...
        | Unstructured-file-n

    Based on the dates of the unstructured files, I would like to move them into the structured hierarchy (each file moved, based on its date, to a folder \wp-content\uploads\YEAR\MONTH). Now, my questions are: 1. Where do I write and execute a script to do the move (I don't have full access to the server, just to a cPanel and to the WordPress admin page)? 2. What must be updated so that the links in posts that reference the unstructured files point to the new location of those files? 3. Not fully related to the previous description: is it alright to move the whole uploads folder to another location, like \uploads? PS: Moving the files/updating the database manually is not an option :)
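    On the second question, the usual technique (an editorial sketch with placeholder file names, not from the original post) is a search-and-replace on post content once a file's new location is known, e.g. run from cPanel's phpMyAdmin:

        -- hypothetical example for one file moved into 2010/02
        UPDATE wp_posts
        SET post_content = REPLACE(post_content,
            'wp-content/uploads/Unstructured-file-1',
            'wp-content/uploads/2010/02/Unstructured-file-1');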

    Read the article

  • Puppet claims to be unable to resolve domains even though the domain resolves properly

    - by gparent
    I have a fairly simple puppet setup: one master and one node, both running Debian Squeeze 6.0.4. I have DNS entries for the two machines, client and master respectively, and both DNS entries resolve correctly on both machines to the right IPs. On my client, I have this configuration:

        [main]
        server = master.example.org
        logdir = /var/log/puppet
        vardir = /var/lib/puppet
        ssldir = /var/lib/puppet/ssl
        rundir = /var/run/puppet
        factpath = $vardir/lib/facter
        pluginsync = true
        templatedir = /var/lib/puppet/templates

    Key exchange seems to fail, according to this message in /var/log/syslog:

        localhost puppet-agent[11364]: Could not request certificate: getaddrinfo: Name or service not known

    Why is resolution not working only for puppet?
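    A diagnostic worth noting here (editorial, not from the post): dig and similar tools query DNS directly, while getaddrinfo goes through the resolver order configured in nsswitch.conf, so a broken /etc/hosts or /etc/nsswitch.conf can make a name resolve on the command line but not for puppet. Comparing the two paths can localize the fault:

        getent hosts master.example.org   # uses getaddrinfo, the same path puppet takes
        dig +short master.example.org     # queries DNS directly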

    Read the article

  • Knowledge and user-generated content management system to track files, research, proposals, etc.?

    - by Eshwar
    I'll try to keep it short. Here's the scenario: we have employees all over the world performing similar work, i.e. research, generating PowerPoint slides, Word documents, graphics, etc. A lot of this previous work can often be reused for a future project. The current arrangement is email and phone calls, which, as you would agree, is quick if you know where to look but otherwise archaic and very, very inefficient. So I am looking for software that will allow me to do the following:

    - Tag files, e.g. an investor presentation on cellphone usage in Kenya would be tagged investor, cellphone, kenya.
    - Manage references, e.g. if we read something on the internet, we should be able to paste that link in some fashion and tag it as above.
    - Preferably cloud-based, so that it can be accessed by anybody; additionally, it would be nice (though NOT a must) to have access levels (director, manager, everyone).
    - A nice interface that non-technically-savvy folks can warm up to ;)
    - A desktop app would be handy, so that people don't always have to click upload or something.

    A tree-based system is inefficient in this case, because content is usually linked across branches and people might not quite agree on one format of a tree; tagging works around this very nicely. What I have considered so far: Evernote (for its more professional look), Springpad (for its versatility with content), and Mendeley (a research manager, and in some ways ideal, but I fear it's limited to PDFs). The goal is that when somebody wants to look for a document, they don't have to ask a colleague; they can just search with keywords and all relevant information shows up. Thanks!

    Read the article

  • Need advice for choosing software/hardware for virtualization

    - by Anatoly
    Currently we have these servers:

    1. Windows SBS 2003 Premium on an IBM X266, dual Xeon F43, 2GB RAM: DC, Exchange (70 users), MSSQL.
    2. Windows 2003 R2 32-bit on an IBM x3400 with dual Xeon E5310 and 4GB RAM: terminal server (40+ users), an ERP application based on the uniPaaS platform from Magic Software, and Pervasive SQL.
    3. Ubuntu 8.04 (simple PC box) with a Squid proxy, a GLPI system and a phpBB3 forum for internal use.

    Recently the number of concurrent users on the terminal server passed 40 in rush hours and it gets stuck frequently, so we need an upgrade. I am thinking about moving all physical servers to virtual servers based on a cluster of 2 physical servers, to reduce downtime. I think we will grow to 50-60 concurrent terminal users in rush hours. I also plan to virtualize 10-15 Win XP/7 workstations (Office, ERP, etc.), and there is a small chance we will add Asterisk/Hylafax for 100 users (if that is possible on the same VM). We also need NAS storage of 2-3TB. What hardware upgrade/purchase do we need to complete this task? Which VM solution is preferable, VMware or Hyper-V? What backup software should we choose, Acronis or something else? Thank you in advance.

    Read the article

  • Hiera + Puppet classes

    - by Amadan
    I'm trying to figure out Puppet (3.0) and how it relates to the built-in Hiera. So this is what I tried, an extremely simple example (I'll make a more complex hierarchy when I manage to get the simple one working):

        # /etc/puppet/hiera.yaml
        :backends:
          - yaml
        :hierarchy:
          - common
        :yaml:
          :datadir: /etc/puppet/hieradata

        # /etc/puppet/hieradata/common.yaml
        test::param: value

        # /etc/puppet/modules/test/manifests/init.pp
        class test ($param) {
          notice($param)
        }

        # /etc/puppet/manifests/site.pp
        include test

    If I apply it directly, it's fine:

        $ puppet apply /etc/puppet/manifests/site.pp
        Scope(Class[Test]): value

    If I go through the puppet master, it's not fine:

        $ puppet agent --test
        Could not retrieve catalog from remote server: Error 400 on SERVER: Must pass param to Class[Test] at /etc/puppet/manifests/site.pp:1 on node <nodename>

    What am I missing? EDIT: I just left the office, but a thought struck me: I should probably restart the puppet master so it can see the new hiera.yaml. I'll try that on Monday; in the meantime, if anyone figures out a different problem, I'd appreciate it :)
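    For what it's worth (editorial note): in Puppet 3 the master reads hiera.yaml only at startup, so restarting it after editing the file is indeed required. The lookup itself can also be tested from the master with the hiera command-line tool; the paths below match the question, but treat the exact invocation as a sketch:

        service puppetmaster restart
        hiera -c /etc/puppet/hiera.yaml test::param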

    Read the article
