Search Results

Search found 5152 results on 207 pages for 'scheduled tasks'.


  • Group policy not applying to security group

    - by ihavenoideawhatimdoing
    Preface: I have enough privileges to create GPOs in my OU, and have made a few of them for some simple tasks (like deploying a printer to certain users). I'm not actually a sysadmin; I'm a developer who is winging it. I wanted to create a GPO that would set a mapped folder for a certain security group (which I recently created and which contains only myself). I did the following:
    - Created the GPO in MyOU - Users
    - Removed the default Authenticated Users entry under Security Filtering
    - Added the security group containing my account to Security Filtering
    - Set up the mapping via the User Configuration option
    - Changed GPO Status to "Computer configuration settings disabled"
    - Left WMI Filtering alone
    - Closed the GPO at this point
    Then:
    - Logged in as the target user and ran gpupdate /force
    - Logged out, logged in, ran gpresult /r; no mention of my GPO
    - Rebooted
    - Logged in, re-ran gpupdate /force
    - Logged out, logged in, ran gpresult /r; still no mention of my GPO
    If I log in as a completely different user, their RSOP information shows that the new GPO is being ignored due to a security restriction, so it appears to be "working" for other users. I just can't get it to show up in RSOP for the one user it should apply to. Is there anything else I can do short of rebooting endlessly and crossing my fingers?
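
    One thing worth checking (an assumption, not a confirmed diagnosis): membership in a newly created security group only enters the user's logon token at the next full logon, so gpupdate /force alone will not pick it up, and the GPO is then skipped exactly as if security filtering denied it. A quick check from the target user's session, where "MyMappingGroup" stands in for the real group name:

        REM Is the new group actually present in the current logon token?
        whoami /groups | findstr /i "MyMappingGroup"

        REM If it is missing, log off and back on (or reboot), then:
        gpupdate /force
        gpresult /r /scope:user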

    Read the article

  • Choosing the right e-mail client

    - by CFP
    Hi all, I'm currently using Outlook 2007 (under Windows 7), but I much prefer free software (open source being the best, of course), so I thought I'd ask for expert advice here. I thought it might be easier if I included a small "wanted list". I receive about 15 to 30 e-mails every day, but I have large archives (10,000 emails) which I frequently need to access. I usually open and close my mail program many times, so I'd like it to start pretty fast. I cannot use an online mailbox, because I have too many email addresses (about 5: 1 for work, 1 for home, 1 semi-private, 1 for specific emails, and 1 for newsletters). In order of importance, I'd like my mail client to be able to:
    - Efficiently categorize e-mails. Until now I've mostly been using Outlook folders, because filtering by tags was not easy, but I'd rather have one large list of mails, neatly tagged so I can easily filter. I'd love to be able to select mails by tag (e.g. in a click or two, perhaps via a tab, show all mails tagged "software").
    - Create "tagging rules", such as "if the mail was sent to this address, add this tag" or "if the body contains ..., add that tag".
    - Sync contacts with Gmail, handle tasks (syncing with Toodledo would be awesome), and possibly provide a calendar.
    - Create e-mail templates, signatures...
    Other ideas: a timeline, scripting support, being able to import MS Outlook emails, a nice backup format... Thanks for sharing ideas and suggestions!

    Read the article

  • What can cause Powershell execution policy not to be taken into account?

    - by Stephane
    We have in our infrastructure a number of PowerShell scripts used for various tasks, ranging from user logon to a support technician simulating a user context. These scripts are centralized on our file server (through DFS) for easier management. Some of them are run at logon, some are run through published Citrix applications. We have applied a policy for the whole domain and all users that sets the PowerShell execution policy to "Unrestricted" so that the scripts can run from the file server. This works perfectly fine for logon scripts (at least, so far), but for scripts that are run later (usually through a published application, though the same applies when using Terminal Services and a full desktop), the results are inconsistent: some users can run the scripts fine, some are always prompted in the PowerShell console to let the scripts run. I cannot find anything that could cause this behavior, and it's really inconsistent: if I start PowerShell manually and run Get-ExecutionPolicy, I am told that the current policy is Unrestricted. Yet, if from the same session I try to run a script through a program that calls powershell <script file name> <parameters>, I get prompted before the script can run. What could cause such behavior?
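
    A couple of things that might explain the inconsistency (assumptions to verify, not a confirmed diagnosis): the effective policy is resolved per scope, and even "Unrestricted" still prompts for scripts whose zone is Internet, so a DFS/UNC path that is not in the Local Intranet zone for a given user will trigger the prompt despite Get-ExecutionPolicy reporting Unrestricted. The path below is a placeholder:

        # Show the policy at every scope; the first non-Undefined entry from the top wins
        Get-ExecutionPolicy -List

        # Sidestep zone-based prompting for a single invocation
        powershell.exe -ExecutionPolicy Bypass -File \\yourdomain\dfs\scripts\somescript.ps1 -SomeParam value

    If zone mapping turns out to be the cause, adding the file server's name to the Local Intranet zone (via the IE zone-assignment policy) is the usual way to make the prompt disappear for everyone.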

    Read the article

  • Fast distributed filesystem for large amounts of data with metadata in a database

    - by undefined hero
    My project uses several processing machines and one storage machine. Currently storage is organized as an MSSQL FileTable shared folder. Every file in storage has some metadata in the database. The processing machines execute tasks for which they need files from storage and their metadata. After completing a task, a processing machine puts the resulting data back in storage. From there it's taken by another processing machine, which also generates some file and puts it back in storage, and so on. Everything was fine, but as the number of processing machines increased, I found myself bottlenecked by the storage machine's hard drive performance. So I want the processing machines to put files in a distributed FS to lift the load from the storage machine, so they can take data from each other rather than only from the storage machine. Can you suggest a particular distributed FS which meets my needs? Or is there another way to solve this problem without one? The amount of data in the FS at any one time is several terabytes (the storage machine can handle this, but the processing machines cannot). Data consistency is critical. The read/write policy is: once a file is written it is constant and may only be removed, not modified. My current platform is Windows, but I'm ready to switch it if there is a substantially more convenient solution on another one.

    Read the article

  • What can I do to prevent system power downs?

    - by Joe King
    Yesterday I was given my brother's old laptop - core i7, 2.67GHz, 8GB RAM, 128GB SSD, Win7 64 bit. It's a Sony Vaio Z11, approximately 18 months old. When running something computationally intensive, the fan starts up and after about 30 secs it just powers itself down with no warning. I guess it is overheating. There is nothing in the event logs to suggest what is causing it - the only thing I see is "the last system shutdown was unexpected" or something similar. This is a problem for me because I use a lot of number-crunching apps, which pretty much makes it useless to me. I would like to know if there is anything I can do, other than the obvious things I've done already - open it up and clean out the dust, re-install the OS. According to my brother, this problem started about 6 months ago when it was already outside warranty. If it's just used for simple things - web browsing, word processing etc. - the problem does not occur. Any ideas for what I can do to fix this? Update: I found that the laptop has 2 hardware settings for graphics: Speed and Stamina - the Speed setting seems to use an NVIDIA GeForce GT 330M, while the Stamina setting uses an Intel chipset. With the setting on Speed, I can hear the fan the whole time, and the system powers down after a short while (5-10 mins) even just doing basic tasks (browsing this site for example), but doesn't shut down if I just leave it switched on. In this mode it also sometimes just freezes the screen and I have to power off myself. However, on the Stamina setting it only powers down when doing number crunching and never freezes the screen.

    Read the article

  • 0% CPU in top for all processes, but load average > 1

    - by chrisdew
    On two different servers (both Ubuntu 12.04 LTS AMD64) I have seen the following behaviour:

        top - 10:50:05 up 305 days, 21:17,  1 user,  load average: 1.94, 2.52, 2.97
        Tasks: 141 total,   2 running, 139 sleeping,   0 stopped,   0 zombie
        Cpu(s): 41.5%us,  6.5%sy,  0.0%ni, 51.8%id,  0.0%wa,  0.2%hi,  0.1%si,  0.0%st
        Mem:   8178432k total,  5753740k used,  2424692k free,   159480k buffers
        Swap: 15625208k total,        0k used, 15625208k free,  4905292k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
            1 root      20   0 23928 2072 1216 S    0  0.0   0:56.42 init
            2 root      20   0     0    0    0 S    0  0.0   0:00.01 kthreadd
            3 root      RT   0     0    0    0 S    0  0.0   0:01.23 migration/0
            4 root      20   0     0    0    0 S    0  0.0   2:39.82 ksoftirqd/0
            5 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/0
            6 root      RT   0     0    0    0 S    0  0.0   0:02.99 migration/1
            7 root      20   0     0    0    0 S    0  0.0   2:32.15 ksoftirqd/1
            8 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/1
            9 root      RT   0     0    0    0 S    0  0.0   0:11.67 migration/2
           10 root      20   0     0    0    0 S    0  0.0  29:00.34 ksoftirqd/2

    The server is working fine, but top shows all processes as using 0% CPU. A reboot fixed this on an earlier machine, but I haven't yet tried it on this one. I have tried top several times, and so am sure that I haven't accidentally pressed '<' or '>' to sort by a different column. Sorting the process list by each of the available columns still shows 0% CPU for all displayed processes. What is going on? Is this a kernel bug? Update: If I use top -p <PID> for a known, busy process, top still displays 0% CPU for that process.
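
    A way to rule out a display problem in top itself is to read per-process CPU from the /proc accounting counters directly (a sketch, assuming the sysstat package is available for pidstat):

        # Per-process CPU over five one-second samples, taken from /proc rather than top
        pidstat -u 1 5

        # One-shot view of CPU consumed since each process started, sorted descending
        ps -eo pid,pcpu,etime,comm --sort=-pcpu | head -20

    If these report sensible percentages while top stays at 0% for every process, the problem lies in top's display or accounting mode rather than in the kernel counters.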

    Read the article

  • Proper way to use hiera with puppetlabs-spec-helper?

    - by Lee Lowder
    I am trying to write some rspec tests for my modules. Most of them now use hiera. I have a .fixtures.yml:

        fixtures:
          repositories:
            stdlib: https://github.com/puppetlabs/puppetlabs-stdlib.git
            hiera-puppet: https://github.com/puppetlabs/hiera-puppet.git
          symlinks:
            mongodb: "#{source_dir}"

    and a spec/classes/mongodb_spec.rb:

        require 'spec_helper'

        describe 'mongodb', :type => 'class' do
          context "On an Ubuntu install, admin and single user" do
            let :facts do
              {
                :osfamily               => 'Debian',
                :operatingsystem        => 'Ubuntu',
                :operatingsystemrelease => '12.04'
              }
            end
            it {
              should contain_user('XXXX').with( { 'uid' => '***' } )
              should contain_group('XXXX').with( { 'gid' => '***' } )
              should contain_package('mongodb').with( { 'name' => 'mongodb' } )
              should contain_service('mongodb').with( { 'name' => 'mongodb' } )
            }
          end
        end

    but when I run the spec test, I get:

        # rake spec
        /usr/bin/ruby1.8 -S rspec spec/classes/mongodb_spec.rb --color
        F

        Failures:

          1) mongodb On an Ubuntu install, admin and single user
             Failure/Error: should contain_user('XXXX').with( { 'uid' => '***' } )
             LoadError:
               no such file to load -- hiera_puppet
             # ./spec/fixtures/modules/hiera-puppet/lib/puppet/parser/functions/hiera.rb:3:in `function_hiera'
             # ./spec/classes/mongodb_spec.rb:15

        Finished in 0.05415 seconds
        1 example, 1 failure

        Failed examples:

        rspec ./spec/classes/mongodb_spec.rb:14 # mongodb On an Ubuntu install, admin and single user
        rake aborted!
        /usr/bin/ruby1.8 -S rspec spec/classes/mongodb_spec.rb --color failed
        Tasks: TOP => spec_standalone
        (See full trace by running task with --trace)

    Module spec testing is relatively new, as is hiera. So far I have been unable to find any suitable solutions (the back and forth on puppet-dev was interesting, but not helpful). What changes do I need to make to get this to work? Installing puppet from a gem and hacking on rubylib isn't a viable solution due to corporate policy. I am using Ubuntu 12.04 LTS + Puppet 2.7.17 + hiera 0.3.0.

    Read the article

  • load average in top and procs in vmstat

    - by Mingfei.hua
    As far as I know, the load average in top is the number of processes (threads) in the running or uninterruptible-sleep states, so it should be equal to (procs-r + 1) + procs-b in vmstat. But in practice these two numbers always have a big gap. Is there anything wrong in my understanding? I would appreciate it very much if someone could give me some guidance.

        top - 05:34:50 up 1 day, 20:56,  5 users,  load average: 2.83, 2.67, 1.62
        Tasks:  79 total,   1 running,  78 sleeping,   0 stopped,   0 zombie
        Cpu(s):  6.8%us,  1.8%sy,  0.0%ni, 91.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.4%st
        Mem:   1758000k total,   582636k used,  1175364k free,   103932k buffers
        Swap:   917500k total,        0k used,   917500k free,   180868k cached

        procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
         r  b   swpd    free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
         0  0      0 1182524 103784 180860    0    0     1     9    6   53  7  2 91  0  0
         0  0      0 1182524 103784 180860    0    0     0    36   70  117  0  0 100 0  0
         0  0      0 1182516 103784 180860    0    0     0     0   73  132  0  1 100 0  0
         0  0      0 1182516 103784 180860    0    0     0     0   60  127  0  0 100 0  0
         1  0      0 1182516 103784 180860    0    0     0     0   62  102  0  0 100 0  0
         0  0      0 1182628 103784 180860    0    0     0     0  289  238  1  2 97  0  0
         2  0      0 1152160 103784 180892    0    0     0     8 1481 2371 54 12 34  0  0
         1  0      0 1182192 103784 180860    0    0     0     0  681  834 19  4 78  0  0
         0  0      0 1182200 103784 180860    0    0     0     0   80  147  0  1 100 0  0
         0  0      0 1182200 103784 180860    0    0     0     0   53  107  0  0 100 0  0
         0  0      0 1182208 103788 180856    0    0     0    72   64  123  0  0 100 1  0
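
    Part of the gap is expected rather than a sign of anything broken: the load averages are exponentially damped averages over 1, 5 and 15 minutes, while vmstat's r and b columns are instantaneous samples taken once per interval, so a short burst of runnable threads keeps the 1-minute figure elevated long after r has dropped back to 0. The instantaneous value the kernel feeds into those averages can be read directly:

        # 4th field = currently runnable entities / total entities; this is the
        # instantaneous figure the damped 1/5/15-minute averages are built from
        cat /proc/loadavg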

    Read the article

  • No clue for high load average on top

    - by Oz.
    We have several machines on Amazon (EC2) of the type c1.xlarge with 16 CPUs, running the Amazon AMI. Details on the machine:
    - 7 GB of memory
    - 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
    - 1690 GB of instance storage
    - 64-bit platform
    - I/O Performance: High
    - API name: c1.xlarge
    One of these machines has been showing a high load average since we ran the last yum upgrade a couple of weeks ago. We have not yet updated the other machines, and everything looks normal on them. The strange thing is that the top command shows no hint of the cause of the load. CPUs are 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st (see below). Mem is about 1.5GB free. Any idea what it could be, or where else we can check? Many thanks for the help.

        # top
        top - 07:57:42 up  4:18,  1 user,  load average: 1.36, 1.45, 1.47
        Tasks: 131 total,   1 running, 130 sleeping,   0 stopped,   0 zombie
        Cpu(s):  4.8%us,  1.1%sy,  0.0%ni, 94.1%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   7120092k total,  5644920k used,  1475172k free,   532888k buffers
        Swap:        0k total,        0k used,        0k free,  3463936k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         1557 mysql     20   0 1829m 374m 6448 S 14.3  5.4  11:15.09 mysqld
         6655 apache    20   0  416m  49m 3744 S  9.3  0.7   0:04.85 httpd
        27683 apache    20   0  421m  54m 3708 S  9.0  0.8   0:00.99 httpd
         6682 apache    20   0  424m  57m 3788 S  8.3  0.8   0:03.81 httpd
        16816 apache    20   0  419m  51m 3760 S  4.3  0.7   0:04.09 httpd
        22182 apache    20   0  417m  50m 3756 S  1.7  0.7   0:06.34 httpd
          219 root      20   0     0    0    0 S  0.3  0.0   0:00.34 kworker/7:1
          699 root      20   0     0    0    0 S  0.3  0.0   0:00.40 kworker/3:1
            1 root      20   0 19376 1508 1212 S  0.0  0.0   0:00.29 init
            2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
            3 root      20   0     0    0    0 S  0.0  0.0   0:00.71 ksoftirqd/0
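
    Since the CPU summary looks healthy, one hedged avenue to explore is that the load average also counts tasks in uninterruptible sleep (state D), which burn no CPU but still inflate the figure, e.g. processes stuck waiting on EBS or NFS I/O:

        # Any tasks currently stuck in uninterruptible sleep, and what they are waiting on
        ps -eo state,pid,wchan:30,comm | awk '$1 == "D"'

        # Watch per-device I/O latency for a minute (iostat comes from the sysstat package)
        iostat -x 1 60

    Catching a few samples of the first command while the load is high would show whether D-state tasks, rather than CPU demand, are what is driving the number up.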

    Read the article

  • Translating IPTables rule to UFW

    - by Dario Fumagalli
    We are using an Ubuntu 12.04 x64 LTS VPS. The firewall being used is UFW. I have set up a Varnish + LEMP stack, along with other things, including an Openswan IPsec VPN from our office to the VPS data center. A second in-house Ubuntu box is to act as a MySQL slave and fetch data from the VPS through the VPN. The master's ppp0 is seen as 10.1.2.1 from the slave; they ping each other, etc. I have done the various required tasks, but I can't get the client (slave) MySQL (nor telnet 10.1.2.1 3306) to access the master through the VPN unless I issue this fairly obvious iptables command:

        iptables -A INPUT -s 10.1.2.0/24 -p tcp --dport 3306 -j ACCEPT

    I deliberately allowed input from the whole 10.1.2.0/24 range. With this rule everything works just fine! However, I want to translate this command to UFW syntax so as to keep everything in one place. Now, I admit being inexperienced with UFW. I prepared rules like:

        ufw allow proto tcp from 10.1.2.0/24 port mysql

    and 2-3 variations involving specifying 3306 instead of mysql, specifying a target IP (MySQL's my.cnf at the moment is configured to listen on 0.0.0.0) and similar, but I just don't seem to be able to replicate the simple iptables rule in a functional way. Could anyone kindly give me a suggestion that does not amount to dumping UFW? Thanks in advance.
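
    A hedged guess at why the attempted rule did not match: in UFW syntax, "from <net> port <p>" constrains the source port, so the rule above only matches connections whose source port happens to be 3306 (or mysql). The destination port goes after "to any". A sketch of the equivalent rule and how to verify what actually got installed:

        # Equivalent of: iptables -A INPUT -s 10.1.2.0/24 -p tcp --dport 3306 -j ACCEPT
        ufw allow proto tcp from 10.1.2.0/24 to any port 3306

        # Check the resulting rules
        ufw status verbose
        iptables -L ufw-user-input -n -v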

    Read the article

  • Apache process consumes too much CPU

    - by Niro
    I have an Ubuntu Apache/PHP server doing approx 100 hits/sec, with a PHP cron running in the background. I occasionally get high CPU load on one of the Apache processes, and it stays high regardless of traffic or cron activity. It seems to me that it's stuck in some kind of loop or something. Below you will find the top and strace info. How can I find where the bad code is and what causes this?

        top - 14:45:24 up 3 days,  3:38,  1 user,  load average: 5.10, 5.88, 5.85
        Tasks: 163 total,   5 running, 158 sleeping,   0 stopped,   0 zombie
        Cpu(s): 47.8%us, 18.5%sy,  0.0%ni, 10.2%id,  0.0%wa,  0.0%hi,  1.8%si, 21.6%st
        Mem:   7885012k total,  3858484k used,  4026528k free,   177444k buffers
        Swap:        0k total,        0k used,        0k free,  1037868k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        10736 www-data  20   0  769m 559m 478m R   69  7.3  29:08.30 apache2
        10844 www-data  20   0  824m 601m 492m S   17  7.8   4:37.90 apache2
         1016 root      20   0  242m  25m 4628 S    6  0.3 162:07.93 scalarizr
         9030 www-data  20   0  879m 619m 492m S    4  8.0   5:06.82 apache2
        20216 www-data  20   0  747m 228m 170m S    4  3.0   0:01.94 apache2
        10807 www-data  20   0  814m 584m 492m S    3  7.6   4:54.10 apache2
        10455 www-data  20   0  831m 574m 492m S    3  7.5   4:32.65 apache2
        10495 www-data  20   0  849m 592m 492m S    3  7.7   4:41.10 apache2
        10884 www-data  20   0  840m 581m 492m S    3  7.6   4:25.06 apache2

        ^CProcess 10736 detached
        % time     seconds  usecs/call     calls    errors syscall
        ------ ----------- ----------- --------- --------- ----------------
         74.55    0.148052           1    109755           gettimeofday
         25.36    0.050370           0    164634           clock_gettime
          0.09    0.000178           0     54878           poll
        ------ ----------- ----------- --------- --------- ----------------
        100.00    0.198600                329267           total
        root@ec2-67-202-54-36:~# ^C
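
    One way to map that busy PID back to an actual request is mod_status with extended status enabled; the gettimeofday/clock_gettime pattern in the strace summary is typical of PHP code spinning without doing real I/O, so knowing the URL being served usually narrows it down to a script. A sketch, assuming mod_status is available and /server-status is reachable from localhost:

        # Enable the status module, then turn on per-worker detail in its config
        # (e.g. ExtendedStatus On in /etc/apache2/mods-enabled/status.conf)
        a2enmod status
        service apache2 reload

        # Look up which client/URL the hot PID (10736 here) is serving
        apache2ctl fullstatus | grep 10736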

    Read the article

  • HyperV - low CPU usage

    - by Klark
    I am very new to Hyper-V and virtual machine philosophy in general, so please expect more or less newbie questions :) I have a server that is only used as a host for virtual machines. The OS is Windows Server 2008 R2 and it is running on 16 CPUs and 48 GB of RAM. On the aforementioned server there are 8 VMs, each having 4 CPUs and 4 GB of RAM. On those VMs we are running some CPU-intensive tasks, and each machine shows nearly 100% CPU usage. After I noticed slow performance I went to the host machine and started playing with Process Explorer. It turned out that CPU usage there is very low. Also I/O is very low and, of course, memory consumption is high, which is expected. Of course, I don't expect those 4 virtual cores dedicated to a VM to work as fast as 4 real hardware cores, but I still expected higher consumption of the real hardware. Is this sort of behaviour normal? I see that most of the CPU usage on the host machine is marked as interrupts (which I guess is normal), and all those interrupts are passed to only one core (which is strange). Are there any out-of-the-box optimizations I could perform to finally use all that processing power that is under the hood? My knowledge of virtualization technology is close to embarrassing, so I would be grateful for any links that could enlighten me :) Thanks.
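
    A point that may explain part of the discrepancy (offered as an assumption to check, not a diagnosis): CPU consumed by guest virtual processors is charged to the hypervisor, not to any process visible in Task Manager or Process Explorer in the parent partition, so busy VMs can look idle from the host. The Hyper-V performance counters report the real figures, e.g. from PowerShell on the host:

        # Total CPU actually consumed across all logical processors, including guest work
        Get-Counter '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time'

        # Per virtual processor: how much time each VM's vCPUs are really running
        Get-Counter '\Hyper-V Hypervisor Virtual Processor(*)\% Guest Run Time'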

    Read the article

  • How could application installations/configurations be easier in Linux?

    - by ajsie
    Although you can do almost anything in Linux, it tends to require a lot of tweaking of config files and reading a lot of manuals/tutorials before you can have it running your way. I know that it gets a lot easier with time, and the apt-get installations in Ubuntu/Debian are heading the right way. But how can Linux be made more user-friendly for us in the future? I thought that more could be automated, as in an IDE environment: e.g. typing svn would show all the available commands and a description of each as you move between them with your keyboard. That would be great, but that's just one example. Another is navigation between folders in the terminal: right now you have to type a lot just to jump from one folder to another. It would be great to have some more automation here too. I know that these extra features will slow down the server, but it's 2010 now, and these features are not that heavy for the CPU; they would make the system more user-friendly and encourage maintenance of a server rather than frighten you off. What do you think? Should/could we have a more user-friendly Linux environment on servers? Is there something that has annoyed you a lot? A lot of things are done in the Unix way, but maybe we should reinvent the wheel in some areas, because apparently it's still repetitive and difficult to do easy tasks. It should be easier, I think.

    Read the article

  • High CPU usage on my DigitalOcean droplet

    - by Ibrahim Azhar Armar
    I am experiencing high CPU usage. Here are the stats I got from the server: within 15 minutes of every restart, consumption goes up to 100%. What could be going wrong? I have a WordPress copy installed on the server which does not get much traffic. Here is what I got from the top command on the server:

        top - 11:46:02 up 12 min,  3 users,  load average: 40.89, 16.03, 6.11
        Tasks: 132 total,  41 running,  91 sleeping,   0 stopped,   0 zombie
        Cpu(s): 24.3%us, 61.5%sy,  0.0%ni,  0.0%id,  4.0%wa,  0.0%hi,  0.0%si, 10.2%st
        Mem:   2050896k total,  1988656k used,    62240k free,      284k buffers
        Swap:        0k total,        0k used,        0k free,     4712k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
           31 root      20   0     0    0    0 R   39  0.0   1:35.53 kswapd0
          899 root      20   0 15988  172    0 S   14  0.0   0:05.00 irqbalance
          418 syslog    20   0  243m  600    0 S   13  0.0   0:06.85 rsyslogd
          944 mysql     20   0 1320m  53m    0 S   12  2.7   0:21.15 mysqld
         2357 root      20   0 17344  532  164 R   11  0.0   0:14.27 top
          960 root      20   0  246m 3816    0 S    3  0.2   0:08.18 php5-fpm
         2431 www-data  20   0  344m  64m  908 R    2  3.2   0:04.23 apache2
         2435 www-data  20   0  304m  63m  836 R    2  3.2   0:03.43 apache2
         2413 www-data  20   0  349m  63m  920 R    2  3.2   0:07.51 apache2
         2465 www-data  20   0  349m  64m  944 R    2  3.2   0:05.04 apache2
         2518 www-data  20   0  307m  41m 1204 R    2  2.1   0:01.37 apache2
         2406 www-data  20   0  346m  56m 1144 R    2  2.8   0:03.76 apache2
         2456 www-data  20   0  345m  55m 1184 R    2  2.8   0:02.67 apache2
         2373 www-data  20   0  351m  63m  784 R    2  3.2   0:11.09 apache2
         2439 www-data  20   0  306m  35m  916 R    2  1.8   0:02.51 apache2
         2450 www-data  20   0  345m  55m 1088 R    2  2.8   0:02.96 apache2
         2486 www-data  20   0  299m  10m  876 R    2  0.5   0:01.19 apache2
         2523 www-data  20   0  300m  27m  796 R    2  1.4   0:00.99 apache2
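
    Reading that output, the biggest hint is arguably not the Apache workers but kswapd0 near the top of the list, together with 62 MB free, 284k of buffers and 0k of swap: the droplet looks memory-starved, so the kernel spends its time reclaiming pages. A hedged first step (standard advice for small droplets; the size below is a placeholder) is to add a swap file and trim the Apache/MySQL memory footprint, then see whether the CPU spiral goes away:

        # Create and enable a 2 GB swap file
        fallocate -l 2G /swapfile
        chmod 600 /swapfile
        mkswap /swapfile
        swapon /swapfile

        # Make it permanent across reboots
        echo '/swapfile none swap sw 0 0' >> /etc/fstab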

    Read the article

  • FreeBSD's ng_nat periodically stops passing packets

    - by Korjavin Ivan
    I have a FreeBSD router:

        #uname
        9.1-STABLE FreeBSD 9.1-STABLE #0: Fri Jan 18 16:20:47 YEKT 2013

    It's a powerful computer with a lot of memory:

        #top -S
        last pid: 45076;  load averages:  1.54,  1.46,  1.29   up 0+21:13:28  19:23:46
        84 processes:  2 running, 81 sleeping, 1 waiting
        CPU:  3.1% user,  0.0% nice, 32.1% system,  5.3% interrupt, 59.5% idle
        Mem: 390M Active, 1441M Inact, 785M Wired, 799M Buf, 5008M Free
        Swap: 8192M Total, 8192M Free

          PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
           11 root        4 155 ki31     0K    64K RUN     3  71.4H 254.83% idle
           13 root        4 -16    -     0K    64K sleep   0 101:52 103.03% ng_queue
            0 root       14 -92    0     0K   224K -       2 229:44  16.55% kernel
           12 root       17 -84    -     0K   272K WAIT    0 213:32  15.67% intr
        40228 root        1  22    0 51060K 25084K select  0  20:27   1.66% snmpd
        15052 root        1  52    0   104M 22204K select  2   4:36   0.98% mpd5
           19 root        1  16    -     0K    16K syncer  1   0:48   0.20% syncer

    Its tasks are NAT via ng_nat and a PPPoE server via mpd5. Traffic through it is about 300 Mbit/s, about 40 kpps at peak. PPPoE sessions created: 350 max. ng_nat is configured by the script:

        /usr/sbin/ngctl -f- <<-EOF
            mkpeer ipfw: nat %s out
            name ipfw:%s %s
            connect ipfw: %s: %s in
            msg %s: setaliasaddr 1.1.%s

    There are 20 such ng_nat nodes, with about 150 clients. Sometimes the traffic via NAT stops. When this happens, vmstat reports a lot of FAIL counts:

        vmstat -z | grep -i netgraph
        ITEM                   SIZE  LIMIT     USED     FREE      REQ FAIL SLEEP
        NetGraph items:          72, 10266,       1,     376,39178965,   0,   0
        NetGraph data items:     72, 10266,       9,   10257,2327948820,2131611,4033

    I tried increasing net.graph.maxdata=10240 and net.graph.maxalloc=10240, but this doesn't work. It's a new problem (1-2 weeks old). The configuration had been working well for about 5 months, and no configuration changes were made leading up to the problems starting. In the last few weeks we have slightly increased traffic (from 270 to 300 Mbit/s) and have a few more PPPoE sessions (300-350). Please help me figure out how to find and solve my problem.
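
    One hedged thing to verify (based on how these knobs behave in stock FreeBSD, not on knowledge of this particular box): net.graph.maxdata and net.graph.maxalloc are boot-time tunables, so setting them with sysctl on a running system does not resize the already-created NetGraph zones; they need to go into /boot/loader.conf and take effect after a reboot. For example:

        # /boot/loader.conf
        net.graph.maxdata=65536
        net.graph.maxalloc=65536

        # After the reboot, confirm the new limits and watch whether FAIL still grows
        sysctl net.graph.maxdata net.graph.maxalloc
        vmstat -z | grep -i netgraph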

    Read the article

  • Laptop Randomly Turning On and Off

    - by Ian Mallett
    So, I have a pretty new laptop, and one of its quirks is that, at random times (though typically in the middle of the night), it seems to wake up from sleep mode, churn a bit, and then go back into sleep mode. I write "seems" because its fans are very loud, so it's obvious when it's not asleep, but during the time it is "on", I can't see anything on the screen. I have researched the problem somewhat, and could only find similar issues; nothing identical. In those cases, it appeared that certain devices could be responsible. Nothing is plugged into my computer during this behavior, but I nonetheless disabled every device's permission to wake the computer through the device manager. This included disabling the magic packet wake for the network (despite its only having a wireless connection). Using "powercfg /lastwake" gives an empty wake history. But, I also went through all the tasks and checked if they would wake the computer. None appeared to. The problem persisted, so, after some more research, I found this, and executed it for all power schemes on the computer. The problem persists.
    System:
    - OS: Windows 7 Professional
    - CPU: Intel 990X
    - GPU: NVIDIA GeForce 580M
    - RAM: 12GB
    - Motherboard: Clevo X7200
    - Model: NP7282-S1 (Sager-built laptop)
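
    Two further things can be checked from an elevated command prompt, in case something other than a device is arming a wake (hedged suggestions; both commands exist on Windows 7):

        REM Wake timers currently armed - scheduled tasks and drivers can set these
        powercfg /waketimers

        REM Devices still allowed to wake the machine, as a cross-check of the Device Manager changes
        powercfg -devicequery wake_armed

    If a wake timer shows up, disabling "Allow wake timers" under the advanced power plan settings (Sleep section) stops it from waking the machine regardless of which task owns it.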

    Read the article

  • Restrict SSH user to connection from one machine

    - by Jonathan
    During set-up of a home server (running Kubuntu 10.04), I created an admin user for performing administrative tasks that may require an unmounted home. This user has a home directory on the root partition of the box. The machine has an internet-facing SSH server, and I have restricted the set of users that can connect via SSH, but I would like to restrict it further by making admin only accessible from my laptop (or perhaps only from the local 192.168.1.0/24 range). I currently have only an AllowGroups ssh-users with myself and admin as members of the ssh-users group. What I want is something that works the way you might expect this setup to work (but it doesn't):

        $ groups jonathan
        ... ssh-users
        $ groups admin
        ... ssh-restricted-users
        $ cat /etc/ssh/sshd_config
        ...
        AllowGroups ssh-users admin@192.168.1.*
        ...

    Is there a way to do this? I have also tried the following, but it did not work (admin could still log in remotely):

        AllowUsers admin@192.168.1.* *
        AllowGroups ssh-users

    with admin a member of ssh-users. I would also be fine with only allowing admin to log in with a key, and disallowing password logins, but I could find no general setting for sshd; there is a setting that requires root logins to use a key, but not one for general users.
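
    A sketch of two directions that fit what sshd_config actually supports (hedged; the address pattern is assumed from the 192.168.1.0/24 range mentioned above). First, AllowUsers patterns take a user@host form and are ORed together, so the trailing bare "*" in the attempt above re-allows admin from anywhere; listing the users explicitly avoids that, and when both directives are present a login must satisfy both:

        AllowGroups ssh-users
        AllowUsers jonathan admin@192.168.1.*

    Second, for the key-only fallback, a Match block can disable password authentication for just that account:

        Match User admin
            PasswordAuthentication no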

    Read the article

  • How to make Project auto-estimate duration based on work?

    - by Bruno Brant
    This one has bothered me for a long while. I like to do estimates by thinking about how much effort a certain task will take (I'm in the IT business), so, let's say, it takes 12 hours to build a program. Now, let's say I tell Project that my start date is today. If I allocate one resource to this task, that should mean the task will last 1.5 days, implying that it will end tomorrow. But right now, that is not what it's doing. I say that the task will take 1 hour, and when I add a resource to it, it allocates the resource on a [13%] basis, which means that the duration is still fixed: Project is trying to make the task last for a day. I have, on many occasions, accomplished what I want. What I do is build a plan based on these rough estimates of effort, then allocate tasks to resources. Times conflict, so I level resources, and then Project magically tells me how long, in days, it will take. But every time I have to start estimating again, I end up having trouble figuring out how to make Project work like that.
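
    For reference, the arithmetic Project applies when it is allowed to derive duration from effort (assuming the default 8-hour working day) is simply:

        Duration = Work / (Assignment Units x Hours per day)
                 = 12 h / (1.0 x 8 h/day) = 1.5 days

    A hedged pointer rather than a definitive answer: entering the estimate in the Work column, with the task type set to Fixed Work, makes Project derive duration this way; a task that behaves as fixed duration instead recalculates the assignment units, which matches the [13%] behaviour above (1 hour of work spread over an 8-hour day is roughly 13%).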

    Read the article

  • Manipulating IIS7 Configuration with Powershell

    - by John Price
    I'm trying to script some IIS7 setup tasks in PowerShell using the WebAdministration module. I'm good with setting simple properties, but am having some trouble when it comes to dealing with collections of elements in the config files. As an immediate example, I want to have application pools recycle on a given schedule, and the schedule is a collection of times. When I set this up via the IIS management console, the relevant config section looks like this:

        <add name="CVSupportAppPool">
            <recycling>
                <periodicRestart memory="1024" time="00:00:00">
                    <schedule>
                        <clear />
                        <add value="03:31:00" />
                    </schedule>
                </periodicRestart>
            </recycling>
        </add>

    In my script I would like to be able to accomplish the same thing, and I have the following two lines:

        #clear any pre-existing schedule
        Clear-WebConfiguration "/system.applicationHost/applicationPools/add[@name='$($appPool.Name)']/recycling/periodicRestart/schedule"

        #add the new schedule
        Add-WebConfiguration "/system.applicationHost/applicationPools/add[@name='$($appPool.Name)']/recycling/periodicRestart/schedule" -value (New-TimeSpan -h 3 -m 31)

    That does almost the same thing, but the resulting XML lacks the <clear/> element that is created via the GUI, which I believe is necessary to avoid inheriting any default values. This sort of collection (with "add", "remove", and "clear" elements) seems to be fairly standard in the config files, but I can't seem to find any good documentation on how to interact with them consistently.

    Read the article

  • Calling Excel from PHP 5 through COM fails on Windows 7 when Apache is started through Task Planner

    - by Stefan Pantke
    I am currently writing an application which controls Excel through COM: the app creates a COM-based Excel instance, opens some XLS files and reads their contents.
    Scenario I: On Windows 7, I start Apache and MySQL using xampp-control with system administrator rights. All works as expected. The PHP-based controller script interacts with Excel as expected.
    Scenario II: A problem appears if I start Apache and MySQL as 'background jobs'. Here is how: I created two jobs using the Windows 7 Task Planner. One runs apache_start.bat, the other runs mysql_start.bat. Both tasks run as SYSTEM with elevated privileges when Windows 7 boots. Apache and MySQL work as expected. Specifically, Apache serves HTTP requests from clients and PHP is able to talk to MySQL. But when I call the PHP controller, which calls and interacts with Excel using COM, I do receive an error. The error seems to come from Excel [not COM itself] and reads like one of these:
    - Excel can't read the XLS file
    - Excel failed to save the file due to an ill-named worksheet
    Interestingly, during the first run of the PHP-based controller script it takes a few seconds to render the error message; each subsequent run renders the error message immediately. The Windows system logs didn't show a single problem report entry. Note that the PHP program and the Apache instance didn't change - except for the way Apache was started. At least the PHP controller script is perfectly able to read the file system, since it provides the paths to the XLS files through scandir() of a certain directory. Concurrency issues can't be the cause of the problem; a single instance of the specific PHP controller interacts with Excel. Question: Could someone provide details on why this happens?
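
    A widely reported workaround for exactly this pattern (Office COM automation works interactively but fails when the caller runs as SYSTEM from the task scheduler) is that Excel expects a Desktop folder inside the system profile, which non-interactive sessions don't have. This is an assumption to try, not a confirmed fix for this setup:

        REM Create the missing profile folders, then retry the controller script
        mkdir C:\Windows\System32\config\systemprofile\Desktop
        mkdir C:\Windows\SysWOW64\config\systemprofile\Desktop

    Failing that, configuring the identity of the Excel DCOM application via dcomcnfg (so it runs as a real user with an existing profile) is the other commonly suggested route; Microsoft generally advises against unattended server-side Office automation in the first place.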

    Read the article

  • How should I perform database maintenance on a 24x7 system

    - by solublefish
    I'm a software developer who inherited a part-time DBA role. I'm responsible for an application backed by a small, high-volume 24x7 database on SQL Server 2008. While there's other stuff in the DB, the critical piece is a 50GB, 7.5M-row table that serves 100K requests/sec during peak load, and about half that at "night". This is 99%+ read traffic, but the writes are constant and required. I need to be able to perform periodic maintenance without a maintenance window - say an index rebuild, a job to purge old data, Windows Update, or a hardware upgrade. Most of the advice I've seen is along the lines of "MAKE a maintenance window." While I appreciate the sentiment, I hope there's another way. If it will solve this problem, I do have the ability to purchase new hardware or modify the database, the clients (a set of web services servers), and much of the application code (ADO.NET + ASP.NET). I've been thinking along the lines of using the warm spare (or a third server) to do the maintenance, and then "swapping" it into production:
    1. Synchronize the spare by restoring backups, including a current transaction log.
    2. Perform the maintenance tasks.
    3. Reconfigure clients to connect to the spare server. Existing connections are finished within a minute or so.
    4. The spare server is now the production server.
    The problem remaining is that the new production server is now out of date by however long it took to perform maintenance. Is there some way that the original production server can be made to queue up changes and merge them to the spare between steps 2 and 3? Any other ideas?
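
    Worth checking before building the swap procedure (a hedged note, since it depends on the edition in use): SQL Server 2008 Enterprise can rebuild indexes online, and purge jobs can run in small batches, which together cover two of the maintenance items without any window at all. The table and column names below are placeholders:

        -- Enterprise edition: rebuild without taking the table offline
        ALTER INDEX ALL ON dbo.BigTable REBUILD WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);

        -- Purge old rows in small batches to limit blocking and log growth
        DELETE TOP (5000) FROM dbo.BigTable
        WHERE CreatedAt < DATEADD(DAY, -90, GETDATE());

    For the queue-and-merge part of the question, database mirroring or log shipping (both available in 2008) is the usual way to keep the spare continuously rolled forward instead of restoring backups by hand.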

    Read the article

  • xauth, ssh and missing home directory

    - by flolo
    We have several servers, and normally everything works fine, except now: we are getting new air conditioning installed. This takes 36 hours, and for this time almost all servers get shut down; only 2 servers remain running for the most important tasks (i.e. accepting incoming email, delivering some important websites, the login server). Everybody was informed that if they need data from the home directories they should fetch it before the shutdown. Long story short: someone realized that he has to run a certain program on one of the servers. No problem - he can log in remotely to our login server and run the program there without a home directory (the binaries are local and the necessary information can be copied to /tmp). That works like a charm until... the user needs to run a GUI program. I can find no easy way to make it run; usually ssh -Y honk@loginserver is enough, but now the home directory is missing and ssh is not able to copy the cookies into ~/.Xauthority (as the file server with the home directories is down). Paranoid as all sysadmins are, all X servers listen only locally, not on TCP ports, so no remote X connection is possible. The SSH config is waterproof - i.e. there is no way to set environment variables. My problem is that the proxy MIT cookie generated by ssh gets lost because ~/.Xauthority doesn't exist. If I could retrieve it somehow, I could re-enter it into a .Xauthority in /tmp. The only other option (besides changing the config) that came to my mind is making a tunnel (netcat, or better ssh) from the remote host to the login server and copying the cookie manually (not sure if the TCP-to-Unix-domain-socket stuff works as expected). Any good suggestions (for the future - now our servers are already up)?

    Read the article

  • Problems using InfraRecorder to back up ISOs of certain CDs

    - by Voyagerfan5761
    I've gotten into the habit of backing up my CDs as ISO files, just in case the discs should be damaged, lost, or destroyed. Using InfraRecorder, the process is pretty painless. Unfortunately, I have run into at least two discs that don't back up. I get the error message:

        Can't read source disc. Retrying from sector 252270

    Sometimes this will appear repeatedly. One of the discs is my retail copy of Star Trek: Armada II; the other is disc one of DOOM 3. Both discs run flawlessly when I put them in the drive and let Windows AutoPlay them. Armada II appears as two tracks (one data, one audio) in InfraRecorder, and the error happens at the approximate track boundary. DOOM 3's first disc, however, fails much sooner (around sector 990) and appears as one solid data track. Am I simply using the wrong tool for this job? InfraRecorder is a nice free tool that I can run from my flash drive and use for most tasks of this type, but it does seem to have trouble with certain things. Ideally I'd like to hear about any workarounds people have found for this issue, but if I must switch tools I'm open to it (preferably to other free tools).

    Read the article

  • Mac mini simple customized, Mac mini server or other?

    - by microspino
    I'm facing a big IT choice for my little office and I need some advice. We have 5 users, 1 super user, 1 HP 500 DesignJet plotter, 4 other laser printers, and 1 HP fax/print/scan/copy machine. All the clients are XP SP3 boxes. We would like to:
    - centralize and share 90 GB of files using a Dropbox (this way we will have LAN sync of local working directories + internet backup + access to our files wherever we are)
    - centralize our plotter, printers and fax machine
    - back up all the workstations
    - share Outlook calendar and tasks
    - run 24x7 while saving some energy
    Of course this setup is just the first step towards a more serious and creative network management of our office, so we are open to new ideas. The budget varies from 400€ to 900€, and we are not tech gurus, but at least one of us is a power user close to becoming a geek. I've read some articles on macminicolo about a Mac mini, either standard or with Snow Leopard Server. I've also heard about Windows Home Server on the Lifehacker website, but I'm in a sort of analysis paralysis. Can you help me?

    Read the article

  • Work from home on an iPad?

    - by Alex Basson
    The situation: My wife has a 13" MacBook Pro that she uses for email, Facebook, web surfing, and working from home. I'm about to buy us our first iPad. My wife's brother's computer just went belly-up, and she's contemplating giving him her MacBook and just using the iPad. The question is whether or not this is possible or realistic. Obviously, the iPad is well-suited for the email/web/Facebook tasks, but the working-from-home thing is an absolute must -- if the iPad can't handle that, it's a deal-breaker. For my wife, working from home means two things: Accessing her workplace computer's Windows Vista desktop, which she currently does via Remote Desktop. Editing Office documents locally, which she currently syncs via Dropbox. Being able to edit documents locally is important, because sometimes she will download documents and edit them when she doesn't have network access (e.g. on the subway). I'm more than happy to get a keyboard dock for her, so typing won't be an issue. Are there any iPad apps she can use to access her work computer and edit her work files? Thanks for any suggestions!

    Read the article
