Search Results

Search found 11321 results on 453 pages for 'shared libraries'.


  • Managing multiple independent domains with Google Apps

    - by Saif Bechan
    I am currently running a server where I have multiple domains, all of them running their own mail server. My plan is to outsource this whole email service and have Google, or a competitor, do this for me. Let me start by describing the setup I have now and want to migrate to Google.
    Initial setup: I have a main domain where I run my server and my nameserver. This is an important domain because it holds the connection with all my internal applications. For example, log messages, cronjob messages, and virus-scan messages are sent to this domain. This email address is also registered at my registrar and I use it to communicate with my ISP. Next, I run a few independent websites that all need their own independent email addresses. This can be on shared space, I don't mind; 1 GB will be enough for everything I am going to do.
    Summary:
    - superdomain.com (which only has a catch-all for internal use and communication with my ISP)
    - cars.com (independent)
    - flowers.com (independent)
    - foods.com (independent)
    I am going to be the admin for all of this. The independent domains don't need their own admin panel; they just need email addresses like info@, support@, etc. I do all the managing and they just send and receive emails using the accounts I give them. Each of the websites has its own staff that use the accounts.
    Tried so far: I have registered my superdomain, but I can only add aliases to the main domain. If I make all the other domains aliases, the emails from [email protected] and [email protected] will share the same inbox. I want them to be separate. Is the only way to achieve this by creating an account for each domain? And if so, is there no way of creating a superdomain account where I can edit all these accounts easily, without having to log in to 4 different places to get my work done? I have searched the Google help forums and posted questions, but without any results so far.
    Questions: Can anyone please give me some advice on what to do? I currently use the free program Google has.


  • How to set up a file server in a restricted corporate environment

    - by Emilio M Bumachar
    I work in a big corporation, and the disk space my team gets on the corporate file server is so low that I am considering turning my work PC into a file server. I ask this community for links to tutorials, software suggestions, and advice in general about how to set it up.
    My machine is an Intel Core2Duo E7500 @ 3GHz with 3 GB of RAM, running Windows XP Service Pack 3. Upgrading, formatting or installing another OS is out of the question, but I do have Administrator privileges on the PC, and I can install programs (at least for now). A lot of security software I don't even know about is, and must remain, installed. But I only need communication within the corporate network, which is not restricted.
    People have usernames (logins) on the corporate network, and I need to use them to restrict access. Simply put, I have a list of logins of team members, and only people on the list should access the files. I have about 150 GB of free disk space and I'm thinking of allocating 100 GB to the team's shared files. I plan monthly backups onto co-workers' machines of the same configuration, but automation of backups is a nice, unnecessary feature: it's totally acceptable for me to manually copy the contents to a different machine once a month. Uptime is important, as everyone would use these files in their daily work.
    I have experience as a Python and C programmer, but no experience whatsoever as a sysadmin, and almost nothing of my programming experience is network programming. I'm a complete beginner in this. Thanks in advance for any help.
    EDIT: I honestly appreciate all the warnings, I really do, but what I plan to make available is mostly stuff that is now solely on DVDs just for space reasons. It's 'daily work' to read them, but 'daily work write' files will remain on the corporate server. As for the importance of uptime, I think I overstated it: a few outages are OK, and it's already an improvement over getting the DVDs. As for policy, my manager is kind of on my side; I will confirm that before making my move. As for getting more space through the proper channels, well, that was Plan A, and it's still on the table... but I don't have much hope. I'm not as "core business" as I'd like.


  • Keep source IP after NAT

    - by John Miller
    Until today I used a cheap router so I could share my internet connection and keep a webserver online too, while using NAT. Users' IPs ($_SERVER['REMOTE_ADDR']) were fine; I was seeing the users' real public IPs. But as traffic grew every day, I had to install a Linux server (Debian) to share my internet connection, because my old router couldn't keep up with the traffic anymore. I shared the internet via iptables using NAT, but now, after forwarding port 80 to my webserver, instead of seeing the real users' IPs I see my gateway IP (the Linux box's internal IP) as every user's IP address. How do I solve this issue? I edited my post so I can paste the rules I'm currently using:

        #!/bin/sh
        # I made a script to set the rules
        # I flush everything here.
        iptables --flush
        iptables --table nat --flush
        iptables --delete-chain
        iptables --table nat --delete-chain
        iptables -F
        iptables -X
        # I drop everything as a general rule, but this is disabled under testing
        # iptables -P INPUT DROP
        # iptables -P OUTPUT DROP
        # these are the loopback rules
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT
        # here I set the SSH port rules, so I can connect to my server
        iptables -A INPUT -p tcp --sport 513:65535 --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp --sport 22 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT
        # These are the forwards for port 80
        iptables -t nat -A PREROUTING -p tcp -s 0/0 -d xx.xx.xx.xx --dport 80 -j DNAT --to 192.168.42.3:80
        iptables -t nat -A POSTROUTING -o eth0 -d xx.xx.xx.xx -j SNAT --to-source 192.168.42.3
        iptables -A FORWARD -p tcp -s 192.168.42.3 --sport 80 -j ACCEPT
        # These are the forwards for bind/dns
        iptables -t nat -A PREROUTING -p udp -s 0/0 -d xx.xx.xx.xx --dport 53 -j DNAT --to 192.168.42.3:53
        iptables -t nat -A POSTROUTING -o eth0 -d xx.xx.xx.xx -j SNAT --to-source 192.168.42.3
        iptables -A FORWARD -p udp -s 192.168.42.3 --sport 53 -j ACCEPT
        # And these are the rules so I can share my internet connection
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        iptables -A FORWARD -i eth0:1 -j ACCEPT

    If I delete the MASQUERADE part, I see my real IP when echoing it with PHP, but I don't have internet. How can I have internet access and still see the real client IP while the ports are forwarded? (xx.xx.xx.xx is my public IP; I hid it for security reasons.)
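    One possible fix, offered only as a sketch: the usual culprit is that the forwarded traffic is being source-NATed as well as destination-NATed. If the MASQUERADE rule is restricted to traffic that actually originates from the LAN (assumed here to be 192.168.42.0/24, to match the 192.168.42.3 webserver above), and the extra SNAT --to-source lines are dropped, the DNAT'ed connections keep the client's original source address while outbound LAN traffic is still masqueraded:

        # masquerade only connections that originate inside the LAN
        iptables -t nat -A POSTROUTING -s 192.168.42.0/24 -o eth0 -j MASQUERADE
        # keep the plain DNAT for port 80; no SNAT back to 192.168.42.3
        iptables -t nat -A PREROUTING -p tcp -d xx.xx.xx.xx --dport 80 -j DNAT --to 192.168.42.3:80
        iptables -A FORWARD -p tcp -d 192.168.42.3 --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT

    This assumes the webserver's default gateway is the Debian box, so reply packets route back through it.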


  • My network drive disappears from Mac OS Finder

    - by Mariusz
    I have recently bought a Netgear WNDR3800 router to use in my home network, but on the very day I installed it I noticed some strange behaviour in Finder and iTunes. Let me explain further. There is a Synology DS111 NAS attached to that router and two Macs with Mac OS X Lion; one of them is connected by cable and the second one wirelessly. Before I changed my router to the new one mentioned above, Finder always used to display my NAS in its sidebar, so I could just click its network name to access the shared folders on it. But after I installed the WNDR3800, I can no longer access the NAS that way. It is no longer displayed; I always have to mount it manually by typing its IP address using Finder's 'Connect to Server' option. The same NAS supports Time Machine backups and has a built-in DLNA server, and it's the same situation there: I can't perform a backup because my NAS is no longer accessible in Time Machine preferences, and iTunes does not display it (as a multimedia server) either, even though it did before I installed that router. What's important: everything works fine for a couple of minutes after I restart the router or the NAS. Even when I change the NAS's IP address it becomes accessible again in Finder, Time Machine and iTunes, but only for some time. Both of the Macs I mentioned behave the same way, and all these issues have been occurring since I installed the new router. Before I did that, everything worked fine. My old router was a Netgear WGR614v10. Would you be so kind as to tell me what you think could possibly be the reason for this behaviour? Which settings of the router should I look at more closely? I'm not a network specialist, but is it possible that some network packets are blocked for some reason? I will be grateful for any clues you give me. Thank you.


  • nginx + php fpm -> 404 php pages - file not found

    - by Mahesh
    *2037 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream

        server {
            listen 80; ## listen for ipv4; this line is default and implied
            #listen [::]:80 default ipv6only=on; ## listen for ipv6
            server_name .site.com;
            root /var/www/site;
            error_page 404 /404.php;
            access_log /var/log/nginx/site.access.log;
            index index.html index.php;
            if ($http_host != "www.site.com") {
                rewrite ^ http://www.site.com$request_uri permanent;
            }
            location ~* \.php$ {
                fastcgi_index index.php;
                fastcgi_pass 127.0.0.1:9000;
                #fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_buffer_size 128k;
                fastcgi_buffers 256 4k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_temp_file_write_size 256k;
                fastcgi_read_timeout 240;
                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param SCRIPT_NAME $fastcgi_script_name;
            }
            location ~ /\. {
                access_log off;
                log_not_found off;
                deny all;
            }
            location ~ /(libraries|setup/frames|setup/libs) {
                deny all;
                return 404;
            }
            location ~ ^/uploads/(\d+)/(\d+)/(\d+)/(\d+)/(.*)$ {
                alias /var/www/site/images/missing.gif;
                # I need to modify this to show missing.gif only for missing files; right now it is shown for all files.
            }
            location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
                access_log off;
                expires 20d;
            }
            location /user_uploads/ {
                location ~ .*\.(php)?$ {
                    deny all;
                }
            }
            location ~ /\.ht {
                deny all;
            }
        }

    The php-fpm config is the default and has not been touched. The problem is a little strange to me: error pages show "File not found" only for .php files. Other missing files correctly end up at the 404.php page: site.com/test calls 404.php, but site.com/test.php gives "File not found". I keep searching and making changes, but it hasn't solved the problem.
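    One likely explanation, offered as a sketch rather than a confirmed fix: a request for a non-existent .php file still matches the PHP location, so it is handed to PHP-FPM, and PHP-FPM (not nginx) produces the "Primary script unknown" / "File not found" response, bypassing error_page. Having nginx check for the script first usually restores the 404.php behaviour:

        location ~* \.php$ {
            try_files $uri =404;   # if the script does not exist, let nginx return 404 (and thus /404.php)
            fastcgi_index index.php;
            fastcgi_pass 127.0.0.1:9000;
            include /etc/nginx/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }

    An alternative with much the same effect is fastcgi_intercept_errors on; together with the existing error_page directive.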


  • NFS4 / ZFS: revert ACL to clean/inherited state

    - by Keiichi
    My problem is identical to this Windows question, but pertains to NFS4 (Linux) and the underlying ZFS (OpenIndiana) we are using. We have this ZFS shared via NFS4 and CIFS for Linux and Windows users respectively. It would be nice for both user groups to benefit from ACLs, but the one missing puzzle piece goes thusly: each user has a home, where he sets a top-level, inherited ACL. He can later refine permissions for the contained files/folders iteratively. Over time, permissions sometimes need to be generalized again to avoid increasing pollution of ACL entries. You can tweak the ACL of every single file if need be to obtain the wanted permissions, but that defeats the purpose of inherited ACLs. So, how can an ACL be completely cleared, as in the question linked above? I have found nothing about what a blank, inherited ACL should look like. This use case simply does not seem to exist. In fact, the Solaris chmod man page clearly states:

        A-    Removes all ACEs for current ACL on file and replaces current ACL with new ACL that represents only the current mode of the file.

    i.e. we get three new ACL entries filled with stuff representing the permission bits, which is rather useless for cleaning up. If I try to manually remove every ACE, on the last one I get

        chmod A0- <file>
        chmod: ERROR: Can't remove all ACL entries from a file

    which, by the way, makes me think: why not? In fact, I really want the whole file-specific ACL gone. The same holds for Linux, which enumerates ACEs starting with 1(!), and verbalizes its woes less diligently:

        nfs4_setacl -x 1 <file>
        Failed setxattr operation: Unknown error 524

    So, what is the idea behind ACLs under Solaris/NFS? Can they never be cleaned up? Why does the recursion option for the ACL-setting commands pollute all children instead of setting a single ACL and making the children inherit? Is this really the intention of the designers? I can clean up the ACLs using a Windows client perfectly well, but am I supposed to tell the Linux users they have to switch OS just to consolidate permissions?


  • Getting an error while installing mod_wsgi on CentOS

    - by user825904
    I have reinstalled python with enable shared [root@master mod_wsgi-3.4]# make clean rm -rf .libs rm -f mod_wsgi.o mod_wsgi.la mod_wsgi.lo mod_wsgi.slo mod_wsgi.loT rm -f config.log config.status rm -rf autom4te.cache [root@master mod_wsgi-3.4]# LD_RUN_PATH=/usr/local/lib make apxs -c -I/usr/local/include/python2.7 -DNDEBUG mod_wsgi.c -L/usr/local/lib -L/usr/local/lib/python2.7/config -lpython2.7 -lpthread -ldl -lutil -lm /usr/lib64/apr-1/build/libtool --silent --mode=compile gcc -prefer-pic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -Wformat-security -fno-strict-aliasing -DLINUX=2 -D_REENTRANT -D_GNU_SOURCE -pthread -I/usr/include/httpd -I/usr/include/apr-1 -I/usr/include/apr-1 -I/usr/local/include/python2.7 -DNDEBUG -c -o mod_wsgi.lo mod_wsgi.c && touch mod_wsgi.slo In file included from /usr/local/include/python2.7/Python.h:8, from mod_wsgi.c:142: /usr/local/include/python2.7/pyconfig.h:1161:1: warning: "_POSIX_C_SOURCE" redefined In file included from /usr/include/sys/types.h:26, from /usr/include/apr-1/apr-x86_64.h:127, from /usr/include/apr-1/apr.h:19, from /usr/include/httpd/ap_config.h:25, from /usr/include/httpd/httpd.h:43, from mod_wsgi.c:34: /usr/include/features.h:162:1: warning: this is the location of the previous definition In file included from /usr/local/include/python2.7/Python.h:8, from mod_wsgi.c:142: /usr/local/include/python2.7/pyconfig.h:1183:1: warning: "_XOPEN_SOURCE" redefined In file included from /usr/include/sys/types.h:26, from /usr/include/apr-1/apr-x86_64.h:127, from /usr/include/apr-1/apr.h:19, from /usr/include/httpd/ap_config.h:25, from /usr/include/httpd/httpd.h:43, from mod_wsgi.c:34: /usr/include/features.h:164:1: warning: this is the location of the previous definition mod_wsgi.c: In function ‘wsgi_server_group’: mod_wsgi.c:991: warning: unused variable ‘value’ mod_wsgi.c: In function ‘Log_isatty’: mod_wsgi.c:1665: warning: unused variable ‘result’ mod_wsgi.c: In function ‘Log_writelines’: mod_wsgi.c:1802: warning: unused variable ‘msg’ mod_wsgi.c: In function ‘Adapter_output’: mod_wsgi.c:3087: warning: unused variable ‘n’ mod_wsgi.c: In function ‘Adapter_file_wrapper’: mod_wsgi.c:4138: warning: unused variable ‘result’ mod_wsgi.c: In function ‘wsgi_python_term’: mod_wsgi.c:5850: warning: unused variable ‘tstate’ mod_wsgi.c:5849: warning: unused variable ‘interp’ mod_wsgi.c: In function ‘wsgi_python_child_init’: mod_wsgi.c:7050: warning: unused variable ‘l’ mod_wsgi.c:6948: warning: unused variable ‘interp’ mod_wsgi.c: In function ‘wsgi_add_import_script’: mod_wsgi.c:7701: warning: unused variable ‘error’ mod_wsgi.c: In function ‘wsgi_add_handler_script’: mod_wsgi.c:8179: warning: unused variable ‘dconfig’ mod_wsgi.c:8178: warning: unused variable ‘sconfig’ mod_wsgi.c: In function ‘wsgi_hook_handler’: mod_wsgi.c:9375: warning: suggest parentheses around assignment used as truth value mod_wsgi.c:9377: warning: suggest parentheses around assignment used as truth value mod_wsgi.c:9379: warning: suggest parentheses around assignment used as truth value mod_wsgi.c:9383: warning: suggest parentheses around assignment used as truth value mod_wsgi.c:9403: warning: suggest parentheses around assignment used as truth value mod_wsgi.c:9405: warning: suggest parentheses around assignment used as truth value mod_wsgi.c:9408: warning: suggest parentheses around assignment used as truth value mod_wsgi.c: In function ‘wsgi_daemon_worker’: mod_wsgi.c:10819: warning: unused variable 
‘duration’ mod_wsgi.c:10818: warning: unused variable ‘start’ mod_wsgi.c: In function ‘wsgi_hook_daemon_handler’: mod_wsgi.c:13172: warning: unused variable ‘i’ mod_wsgi.c:13170: warning: unused variable ‘elts’ mod_wsgi.c:13169: warning: unused variable ‘head’ mod_wsgi.c: At top level: mod_wsgi.c:8142: warning: ‘wsgi_set_user_authoritative’ defined but not used mod_wsgi.c:15251: warning: ‘wsgi_hook_check_user_id’ defined but not used /usr/lib64/apr-1/build/libtool --silent --mode=link gcc -o mod_wsgi.la -rpath /usr/lib64/httpd/modules -module -avoid-version mod_wsgi.lo -L/usr/local/lib -L/usr/local/lib/python2.7/config -lpython2.7 -lpthread -ldl -lutil -lm


  • Virtual Private Hosting DNS configuration

    - by Ciel
    I did a great deal of reading here before posting this because I didn't want to post a duplicate - but I'm on a bit of a deadline and getting frustrated, so here goes... I very, very, very sincerely apologize if this is long-winded or hard to read. Please - please just ask for any information or clarification and I will give it as quickly as I possibly can. This has become very frustrating to me and this is the last place I know to turn. I have no experience with setting up DNS, no experience with nameservers, and no peers to go to for help, so this is kind of my last-ditch effort.
    The task of setting up a private server has, through circumstances beyond my control, fallen into my lap. I own a domain (hereafter referred to as yyy.com) and have always used shared hosting - I buy a package and just point it to the domain nameservers they give me. It's always been simple. yyy.com is registered with Network Solutions. Now I have purchased a Virtual Private Hosting package from GoDaddy.com - and it comes with Plesk 11. I have no earthly idea how to begin to get the right nameserver for yyy.com. I have gone through the instructions and have wound up exceedingly frustrated. I have 2 IP addresses from GoDaddy for the server. This is what I have so far, and I cannot tell if it is working (since propagation takes so long, it is extremely hard for me to test):
    IP 1: XX.XX.XX.XX
    IP 2: YY.YY.YY.YY
    (obviously hidden for privacy)
    Now, after going through the documented setup and waiting a few days, this is the configuration I have - and so far it does not appear to be working:

        Host                 Record type   Value
        XX.XX.XX.XX / 24     PTR           yyy.com.
        yyy.com.             NS            ns1.yyy.com.
        yyy.com.             A             XX.XX.XX.XX
        yyy.com.             MX (10)       mail.yyy.com.
        ftp.yyy.com.         CNAME         yyy.com.
        ipv4.yyy.com.        A             XX.XX.XX.XX
        mail.yyy.com.        A             XX.XX.XX.XX
        mssql.yyy.com.       A             XX.XX.XX.XX
        ns1.yyy.com.         A             XX.XX.XX.XX
        ns2.yyy.com.         A             YY.YY.YY.YY
        webmail.yyy.com.     A             XX.XX.XX.XX
        www.yyy.com.         CNAME         yyy.com.

    yyy.com is pointing to both ns1.yyy.com and ns2.yyy.com. Can anyone give me some assistance here? This is a learning experience for me, and days of documentation have left me very confused.
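    A quick way to see whether the delegation and records are live, sketched here with dig (querying your own nameserver directly does not depend on propagation; nslookup works similarly if dig is unavailable):

        dig NS yyy.com +short                 # should list ns1.yyy.com. and ns2.yyy.com.
        dig @ns1.yyy.com yyy.com A +short     # ask your own nameserver directly; expect XX.XX.XX.XX
        dig @ns2.yyy.com yyy.com A +short     # the second nameserver should answer the same way
        dig yyy.com MX +short                 # expect 10 mail.yyy.com.

    If the @ns1/@ns2 queries answer correctly but the plain queries do not, the records themselves are fine and the problem is the delegation at Network Solutions (the registered nameservers and their glue A records).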


  • Group traffic shaping with traffic control?

    - by mmcbro
    I'm trying to limit the output bandwidth generated by an application with Linux tc. This application sends me the source port of the request, which I use as a filter to limit each user to a given download speed. I feel that my setup could be managed much better if I had a better knowledge of Linux tc. At the application level, users are categorized as members of a group, and each group has a limited bandwidth. Example:
    - Members of group A: 512kbit/s
    - Members of group B: 1Mbit/s
    - Members of group C: 2Mbit/s
    When a user connects to the application, it retrieves the source port of the user's request and sends me the source port and the bandwidth at which the user must be limited, depending on the group he belongs to. With this information I must add the appropriate rules so that the user (the source port, in reality) is limited to the right bandwidth. If the user that connects isn't a member of any group, he should be limited to a default bandwidth. I'm currently managing this with a self-made daemon that adds or removes rules when it receives a request from the application. With my limited knowledge of tc I'm not able to limit the other users (the ones that aren't in a group - all the others, in fact) to a default speed, and my configuration seems awful to me. Here is the base of my tc qdisc and classes:

        tc qdisc add dev eth0 root handle 1: htb
        tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbps ceil 125mbps

    To classify a user at a given speed I have to add one subclass and then associate one filter with it:

        # a member of group A
        tc class add dev eth0 parent 1:1 classid 1:11 htb rate 512kbps ceil 512kbps
        # its associated filter to match his source port
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 50001 flowid 1:11
        # a member of group A again
        tc class add dev eth0 parent 1:1 classid 1:12 htb rate 512kbps ceil 512kbps
        # its associated filter to match his source port
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 61524 flowid 1:12
        # a member of group B
        tc class add dev eth0 parent 1:1 classid 1:13 htb rate 1000kbps ceil 1000kbps
        # its associated filter to match his source port
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 57200 flowid 1:13

    I already know that a source port could be the same if it comes from a different IP address; the thing is, the application is behind a proxy, so I don't have to manage any IP addresses in this situation. I would like to know how to manage the fact that all the other users (requests/source ports, whatever you name them) should each be limited to a given speed. I mean that each connection should be able to use at most 100kbit/s, for example, not a shared 100kbit/s. I would also like to know if there is a way to simplify my rules. I don't know if it is possible to use only one class per group and associate multiple filters with the same class, so each user could be handled by one class instead of one class per user. I appreciate any advice, thanks.
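    On the last two points, a sketch (syntax assumed from the HTB/u32 documentation, not tested against this exact setup): several u32 filters can point at the same classid, so one class per group is possible, and HTB's default parameter sends everything that matches no filter to a catch-all class. Note that in tc, 'kbps' means kilobytes per second; 'kbit' is kilobits.

        # root qdisc with a default class for everything that matches no filter
        tc qdisc add dev eth0 root handle 1: htb default 99
        tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit ceil 100mbit

        # one class for group A, with one filter per member pointing at it
        # (note: members then SHARE this 512kbit, they do not get 512kbit each)
        tc class add dev eth0 parent 1:1 classid 1:11 htb rate 512kbit ceil 512kbit
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 50001 0xffff flowid 1:11
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 61524 0xffff flowid 1:11

        # catch-all class for users who are in no group
        tc class add dev eth0 parent 1:1 classid 1:99 htb rate 100kbit ceil 100kbit

    The catch-all class gives unclassified users a shared 100kbit/s; a true per-connection cap would need per-flow fair queuing (e.g. an sfq child qdisc) or per-port hashing, which is a separate exercise.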


  • Windows doesn't get access to the internet while Linux easily does

    - by flashnik
    We have a very interesting problem. The network is configured in this way:
    - the internet connection goes into a Trendnet switch (TS)
    - a DHCP server at 192.168.0.1 running on Ubuntu (S) is connected to the internet switch
    - DNS is also configured on 192.168.0.1, on S
    - D-Link Wi-Fi boosters are connected to switch TS
    - PCs use D-Link PCI-E Wi-Fi cards to get access to the network
    - PCs have both Ubuntu and Windows 7
    There are about 40 PCs. When a PC is booted into Ubuntu it easily gets access to the internet. But when it's booted into Windows 7, it gets a valid IP address but doesn't get access to the internet. The address, mask, DNS and GW address are exactly the same as when it's booted under Ubuntu. S is reachable and pingable. Sometimes, when we are lucky, a PC gets access to the internet, but after rebooting it can lose it. When a PC under Windows has access, it has exactly the same settings as when it doesn't. What can be done?
    UPDATE: I shared a Dropbox folder with 2 captures of traffic. Ping.pcap is a capture of pinging 8.8.8.8, and google-browser.pcap is a capture of opening google.com in a browser; both are in tcpdump format and were made with Wireshark on a Windows PC. The MAC of the Windows PC ends in b7:63 and its IP is 192.168.0.130.
    UPDATE 2: This is the ifconfig output from the Ubuntu server:

        eth0      Link encap:Ethernet  HWaddr 00:1e:67:13:d5:8d
                  inet addr:193.200.211.74  Bcast:193.200.211.78  Mask:255.255.255.0
                  inet6 addr: fe80::21e:67ff:fe13:d58d/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:196284 errors:0 dropped:44 overruns:0 frame:0
                  TX packets:190682 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:158032255 (158.0 MB)  TX bytes:156441225 (156.4 MB)
                  Interrupt:19 Memory:c1400000-c1420000

        eth0:2    Link encap:Ethernet  HWaddr 00:1e:67:13:d5:8d
                  inet addr:192.168.0.1  Bcast:192.168.0.254  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:19 Memory:c1400000-c1420000

        eth1      Link encap:Ethernet  HWaddr 00:1e:67:13:d5:8c
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:16 Memory:c1300000-c1320000

    nslookup from Windows results in a DNS request timeout; nbtstat gives 'not found'.


  • How do you splice out part of an Xvid-encoded AVI file with ffmpeg? (no problems with other files)

    - by user11955
    Im using the following command, which works for most files, except what seems to be xvid encoded ones /usr/bin/ffmpeg -sameq -i file.avi -ss 00:01:00 -t 00:00:30 -ac 2 -r 25 -copyts output.avi So this should basically splice out 30 seconds of video + audio, starting from 1 minute mark. It does START encoding at the 00:01:00 mark but it goes all the way to the end of the file for some reason, ignoring that I want just 30 seconds. The output looks like this. FFmpeg version git-ecc4bdd, Copyright (c) 2000-2010 the FFmpeg developers built on May 31 2010 04:52:24 with gcc 4.4.3 20100127 (Red Hat 4.4.3-4) configuration: --enable-libx264 --enable-libxvid --enable-libmp3lame --enable-libopenjpeg --enable-libfaac --enable-libvorbis --enable-gpl --enable-nonfree --enable-libxvid --enable-pthreads --enable-libfaad --extra-cflags=-fPIC --enable-postproc --enable-libtheora --enable-libvorbis --enable-shared libavutil 50.15. 2 / 50.15. 2 libavcodec 52.67. 0 / 52.67. 0 libavformat 52.62. 0 / 52.62. 0 libavdevice 52. 2. 0 / 52. 2. 0 libavfilter 1.20. 0 / 1.20. 0 libswscale 0.10. 0 / 0.10. 0 libpostproc 51. 2. 0 / 51. 2. 0 [mpeg4 @ 0x17cf770]Invalid and inefficient vfw-avi packed B frames detected Input #0, avi, from 'file.avi': Metadata: ISFT : VirtualDubMod 1.5.10.2 (build 2540/release) Duration: 00:02:00.00, start: 0.000000, bitrate: 1587 kb/s Stream #0.0: Video: mpeg4, yuv420p, 672x368 [PAR 1:1 DAR 42:23], 25 tbr, 25 tbn, 25 tbc Stream #0.1: Audio: ac3, 48000 Hz, 5.1, s16, 448 kb/s File 'lol6.avi' already exists. Overwrite ? [y/N] y Output #0, avi, to 'lol6.avi': Metadata: ISFT : Lavf52.62.0 Stream #0.0: Video: mpeg4, yuv420p, 672x368 [PAR 1:1 DAR 42:23], q=2-31, 200 kb/s, 25 tbn, 25 tbc Stream #0.1: Audio: mp2, 48000 Hz, 2 channels, s16, 64 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop encoding [mpeg4 @ 0x17cf770]Invalid and inefficient vfw-avi packed B frames detected [buffer @ 0x184b610]Buffering several frames is not supported. Please consume all available frames before adding a new one. frame= 1501 fps=104 q=0.0 Lsize= 15612kB time=30.02 bitrate=4259.7kbits/s ts/s video:15303kB audio:235kB global headers:0kB muxing overhead 0.482620% if I convert this file to mp4 for example, and then perform the same action, it works perfectly.
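    Two things worth trying, offered purely as a sketch (not verified against this particular build): cut with stream copy so the Xvid/AC3 data is passed through untouched, or keep the re-encode but drop -copyts, since preserving the original timestamps on a packed-B-frame Xvid stream is a plausible reason for the -t duration being ignored:

        # cut 30 seconds starting at 1:00 without re-encoding (cuts land on keyframes)
        ffmpeg -i file.avi -ss 00:01:00 -t 00:00:30 -vcodec copy -acodec copy output.avi

        # or keep the original command but without -copyts
        ffmpeg -sameq -i file.avi -ss 00:01:00 -t 00:00:30 -ac 2 -r 25 output.avi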


  • DNS issue, Windows 2003 AD: the server holding the PDC role is down

    - by Dave M
    Our network of Windows 2003 and Windows 2008 servers suddenly has DNS issues. There are 7 DCs: two at our main office and one at each branch site (one branch has two, a 2008 R2 and a WIN2K3). Only two are WIN2008 R2. Running DCDIAG on the WIN2K3 at the main site (DC1) reports no issues. Running it at any branch site reports two issues; all other tests pass. The server DC1 can be pinged by name from any site.

        Starting test: frsevent
           There are warning or error events within the last 24 hours after the SYSVOL has been shared.
           Failing SYSVOL replication problems may cause Group Policy problems.
        Starting test: FsmoCheck
           Warning: DcGetDcName(PDC_REQUIRED) call failed, error 1355
           A Primary Domain Controller could not be located.
           The server holding the PDC role is down.

    Netdom.exe /query DC reports the expected servers. netdom query fsmo reports that the server at the main office holds the following roles: Schema owner, Domain role owner, PDC role, RID pool manager, Infrastructure owner.
    In the DNS management snap-in, DC1 appears as a DNS server but does not appear under _msdcs-dc-_sites-Default-First-Site-Name-_TCP: there is no _ldap or _kerberos record pointing to DC1. The same issue occurs under _msdcs-dc-_sites- -_TCP: again there is no _ldap or _kerberos record pointing to DC1. Under Domain DNS Zones there is no entry for the server; this is the case for every _tcp folder in the DNS. The server DC1 appears correctly as a name server in the reverse lookup zone, and there is a Host (A) record for DC1, but in the forward lookup zone there is no "(same as parent folder)" Host (A) record for DC1, although such an entry exists for the other DCs at the branch sites and the other DC at the main office.
    We have tried stopping and starting the Netlogon service, restarting DNS, and also dcdiag /fix. Netdiag reports an error:

        Trust relationship test. . . . . . : Failed
        [FATAL] Secure channel to domain 'XXX' is broken. [ERROR_NO_LOGON_SERVERS]
        [WARNING] Failed to query SPN registration on DC- (one entry for each branch DC)

    All branches list the problem server and it can be pinged by name from any branch. Fixing this is the number one priority, but we would also like to determine the cause.
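    A quick check, sketched here (substitute the real AD DNS domain name for example.com): error 1355 from DcGetDcName(PDC_REQUIRED) usually means the PDC emulator's SRV record is missing from DNS, which can be confirmed from any member server, and Netlogon on DC1 can be asked to re-register its records:

        nslookup -type=SRV _ldap._tcp.pdc._msdcs.example.com

        rem on DC1: ask Netlogon to re-register the DC's SRV and A records in DNS
        nltest /dsregdns
        rem restarting the Netlogon service has the same re-registration side effect
        net stop netlogon
        net start netlogon

    Checking %systemroot%\system32\config\netlogon.dns on DC1 shows exactly which records it is trying to register, which helps when comparing against what actually appears in the zone.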


  • Exchange-Server Query

    - by Rudi Kershaw
    First, a little background. I've recently been taken on as a web and software developer for a small company which has no other in-house IT support. They've been asking my opinion on lots of IT subjects that are quite far outside my comfort zone; I'm definitely not a network admin.
    Their IT consultancy contractor is pushing them to upgrade their dedicated Exchange server, even though it seems like the one they currently have has a lot of life left in it and is running problem-free. They say it's "coming to the natural end of its life". They want to install a monster with a Xeon E5-2420, 32GB RAM, 2x 1TB HDDs, Windows Server 2012 and Microsoft Exchange 2010, and they want to charge a small fortune for it. Basically, this system seems massively over the top, seeing as it won't be doing anything other than running as an Exchange server for a company with fewer than 25 email accounts.
    My employers also have an in-house file server that hosts three web apps, an SQL server, their local domain, a print server and shared folders. That machine has the same specs as the proposed new one, and it is barely using any of its potential. I asked if Microsoft Exchange 2010 could be installed on their file server, but they said that MS Exchange can't run on the same system as an SQL server because for some reason they will eat up each other's resources (even though the SQL server isn't touching 1% of the current system's CPU or RAM).
    My question really is: are they trying to rip my employers off? Could MS Exchange be installed on their other server (on a virtual instance or not), or does the old one even need replacing at all? Going with their current suggestion will cost the company in excess of £6k, and it seems entirely unnecessary. I apologise, because I know this is probably a little thin on details, but if I carry on I could end up writing a massive essay that no one will want to read. I've been doing my research, but I'm not knowledgeable enough to make any hard decisions. Let me know if you need any more details. Thank you for any help you can offer.
    Further details: the new Exchange would need to support Outlook Web App, 25 users, a few public mailboxes, and email exchange with BlackBerries.


  • Synchronizing/deploying scripts across several systems

    - by otto
    I have a few time-consuming tasks that I like to spread across several computers. These tasks require running an identical Ruby or Python script (or series of scripts that call each other) on each machine. Each machine will have a separate config file telling the script what portion of the task to complete. I want to figure out the best way to synchronize the scripts on these machines prior to running them.
    Up until now, I have been making changes to a copy of the script on a network share and then copying a fresh copy to each machine when I want to run it. But this is cumbersome and leaves a chance for error (e.g. missing a file on the copy or not clicking "copy and replace"). Let's assume the systems are standard Windows machines that are not dedicated to this task and that I don't need to run these scripts all the time (so I don't want a solution that runs 24/7 and always keeps them up to date; I'd prefer something that pushes/pulls on command).
    My thoughts on various options:
    - Simple adaptation of my current workflow: keep the originals on the network drive, but write a batch file that copies over the latest version of the scripts so everything is a one-click operation. Requires action on each system, but that's not the end of the world (since each one usually needs its configuration file changed slightly too).
    - Put everything in a Mercurial/Git repository and pull a fresh copy onto each node. Going straight to the repo from each machine would guarantee a current version (and would have the fringe benefit of allowing edits to the script to be made from any machine). Cons would be that it requires a VCS to be installed on each machine, and there might be some pain dealing with authentication since I wouldn't use a public repo.
    - Open up write access on a shared folder and write a script to use rsync (or similar) to push the changes out to all of the machines at once. This gets a current version onto every machine (though you would have to change the script if you want to omit a machine or add a new one). A possible issue would be that each computer has to allow write access.
    - Dropbox is a reasonable suggestion (and could work well), but I don't want to use an external service and I'd prefer not to have Dropbox running 24/7 on systems that would normally not need it.
    Is there something simple that I am missing? Some tool designed expressly for doing this kind of thing? Otherwise I am leaning toward just tying all of the systems into Mercurial since, while it requires extra software, it is a little more robust than writing a batch file (e.g. if I split part of a script into a separate module, Mercurial will know what to do, whereas I would have to add a line to the batch file).
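    If the rsync option is appealing, a minimal push script might look like the sketch below; the host names and paths are made-up placeholders, and it assumes an rsync binary (e.g. via Cygwin or cwRsync) plus SSH access on the Windows nodes:

        #!/bin/sh
        # push the master copy of the scripts to every worker machine
        SRC=/path/to/master/scripts/
        for HOST in worker01 worker02 worker03; do
            rsync -av --delete "$SRC" "$HOST:/path/to/scripts/"
        done

    --delete keeps the workers' copies identical to the master (removed files disappear too), which is the same guarantee a VCS checkout would give.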


  • Linux Best Practices

    - by Zac
    I'm a life-long Windows developer switching over to Linux for the first time, and I'm starting off with Ubuntu to ease the learning curve. My new laptop will primarily be a development machine: 6GB RAM, 320 GB HD. I'd like there to be 2 non-root users: (a) Development, which will always be me, and (b) Guest, for anyone else. I assume the root user is added by default, like System Administrator in Windows.
    (1) I'd like to mount /home on its own partition, but how does this work if I have two user accounts (Development and Guest)? Are there 2 separate /home directories, or do they get shared? Is it possible to allocate more space for Development and only a tiny bit of space for Guest in GRUB2? How?!?!
    (2) I'm assuming that it's okay that all of my development tools (Eclipse & plugins, SVN, JUnit, ant, etc.) and Java will end up getting installed in non-/home directories such as /usr and /opt, but that my Eclipse/SVN workspace will live under my /home directory on a separate partition... any problems, issues, or concerns with that?
    (3) As far as partitioning schemes go, nothing too complicated, but not plain Jane either:
    - Boot partition, 512 MB, in case I want to install other OSes
    - Ubuntu & non-/home file system, 187.5 GB
    - Swap partition, 12 GB = RAM x 2
    - /home partition, 120 GB
    I don't have any bulky media data (I don't have music or video libraries; this is a lean and mean dev machine), so having 320 GB is like winning the lottery and not knowing what to do with all this space. I figured I'd give a little extra space to the OS/FS partition since I'll be running JEE containers locally and doing a lot of file IO, logging and other memory-intensive operations. Any issues, problems, concerns, suggestions?
    (4) I was thinking about using ext4; it seems to have good filestamping without any space ceiling for me to hit. Any other suggestions for a dev machine?
    (5) I read somewhere that you need to be careful when you install software as the root user, but I can't remember why. What general caveats do I need to be aware of when doing things (installing packages, making system configurations, etc.) as root vs the "Development" user? Thanks!
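    On question (1): there is one /home filesystem with a separate directory per user (/home/development, /home/guest); the space split between them is normally done with disk quotas rather than in GRUB. A rough sketch, assuming the two usernames are development and guest and that /home is its own ext4 partition:

        # add "usrquota" to the /home line in /etc/fstab, then:
        sudo mount -o remount /home
        sudo quotacheck -cum /home      # build the quota database
        sudo quotaon /home
        # limits are given in 1 KiB blocks: ~110 GiB for development, ~10 GiB for guest
        sudo setquota -u development 0 115343360 0 0 /home
        sudo setquota -u guest       0  10485760 0 0 /home

    The usernames and the exact split are illustrative; the quota package provides quotacheck, quotaon and setquota.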


  • SQL Transactional Replication snapshot not applying

    - by dmch2
    Hi, I'm using SQL Transactional Replication with pull subscriptions to replicate databases (hosting their own distribution database) from several servers across a VPN to a central server. I've got the first 2 databases working fine but the 3rd one is causing me problems. My subscription server is SQL 2008, the source systems are all SQL 2005. The source databases are a few 100Mb in size and contain audit data so are simply growing slowly by adding new records at approx 1kb a second. As far as the replication monitor, Agent logs and event logs show everything is working fine - except that no data appears in my subscription database. The distribution agent doesn't seem to want to read the snapshot (and hence the initial state and schema) from the publisher. New transactions aren't applied although they do seem to be arriving OK as the replication monitor shows things like '5 transactions with 10 commands were delivered'. I would expect (as in previous times) to see statements about data being BCPed in the replication monitor. The snapshot is on the publisher on a shared folder. The subscriber can view the snapshot OK (\\repldata) and the alt snapshot folder is pointing at it. But the distribution agent doesn't seem to be making an attempt to do read it. I tried changing the snapshot path to something that's incorrect and didn't even get an error saying that it couldn't access it. After lots of googling etc I found that sp_MSget_repl_commands is called by the subscriber on the distribution database on the publisher. Running a profiler I can see that it's only called for one agent Id. After a reinit it's called for sequence number 0x0 as expected so I thought that would mean it's would look for the snapshot. However, looking on the publisher I see that there's data for two agents - the snapshot agent and the log reader agent (which is being queries). So I guess I need to tell the distribution agent to get the data for both. But how? and more importantly - why? It worked fine on the other two servers I've replicated. I'm not an SQL novice but this is pretty much my first go at replication so don't be afraid to accuse me of missing something obvious/stupid! I can get log files (eg from the distribution agent) if you want but they don't seem to have any errors in them - it just starts up and starts applying log reader agent changes. Cheers Dave


  • Single domain name potentially resolving to multiple servers

    - by Jace
    This is my first time here at Server Fault, and I apologize in advance that this domain stuff is not really my strength. Any and all suggestions are much appreciated. I am completely lost and incredibly tired! I've inherited an incredibly convoluted system from my predecessor, and I'm trying to find a way to solve it - or I need to be told that it just isn't possible. I very, very, very sincerely apologize if this is long-winded or hard to read. Please just ask for any information or clarification and I will give it as quickly as I possibly can.
    I've got an old site on ServerA (some kind of Linux distribution), with the domain SomeDomain.com. There is a new site sitting on ServerB (Ubuntu), with the intention of having SomeDomain.com serve it in the future (it is replacing the old site). ServerA also has a web app that is currently in use by other departments within the company (accessible at SomeDomain.com/web-app/).
    The goal: to have SomeDomain.com and all extensions of this domain name (subdomains, URLs, etc.) serve the new site on ServerB, BUT the URL SomeDomain.com/web-app/ must serve the web app on ServerA.
    The catch: ServerA is a shared server with a hosting company with VERY limiting restrictions in place - I cannot adjust DNS settings (apart from nameservers; I cannot set A records or anything. I have full access to ServerB to do as I wish). Therefore the web app MUST be served from SomeDomain.com/web-app/ and not from a subdomain or anything. These limitations make migrating the web app from ServerA to ServerB rather undesirable, AND this web app will be replaced in the near future, so it isn't worth the effort right now. Therefore, ultimately I will want 1 domain name to resolve to ServerB's IP address most of the time, but in the event that the URL is SomeDomain.com/web-app/, it should resolve to ServerA's IP. Note: the domain names don't, technically, have to resolve to one IP or another - but ultimately the URLs must stay consistent.
    Some things I have tried: I've looked into mod_rewrite and .htaccess to try and achieve this effect, but it doesn't look like it's going to work for me - though I may have done it wrong (on ServerB, I just checked whether the request URI was /web-app/ and tried to serve the /web-app/ folder on ServerA). I do have the ability to modify the nameservers on both servers. I am not able to make a subdomain on ServerA that points back to ServerA (I assume because the hosting company's servers use the URL to determine which site to serve). I figured this could be good, as I could set an A record on ServerB to point to the web app on ServerA - but alas, ServerA requires SomeDomain.com.
    If there is any more information I can give, please let me know. I need a nudge in the right direction, ideas or a solution.
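    The usual way to get this effect, sketched under the assumption that ServerB can run (or already runs) nginx or Apache: point the DNS for SomeDomain.com at ServerB only, and have ServerB reverse-proxy just the /web-app/ path to ServerA's IP while sending the Host header ServerA expects. In nginx terms it might look like this (ServerA's IP is a placeholder):

        server {
            server_name somedomain.com www.somedomain.com;

            location /web-app/ {
                proxy_pass http://203.0.113.10;          # ServerA's IP (placeholder)
                proxy_set_header Host somedomain.com;    # so the shared host serves the right site
            }

            location / {
                # the new site on ServerB is served as normal here
                root /var/www/newsite;
            }
        }

    DNS itself cannot route by URL path, so some kind of proxying on ServerB is what makes the single consistent URL possible.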


  • How to run a restricted set of programs with Administrator privileges without giving up Admin access (Win7 Pro)

    - by frLich
    I have a shared system, running Windows 7 x64, restricted to a 'standard user' with no password. Not everyone who has access to the system has the administrator password. This works rather well, except for some applications - especially the unlock applications for encrypted hard drives/USB flash drives. The specific ones either require Administrator access (e.g. Seagate BlackArmor) or simply fail without it - since these programs are sending raw commands to a device, this is to be expected.
    I would like to be able to add the hashes of these particular programs to a whitelist and have them run as administrator without needing any prompts. Since these are by definition on removable media, I can't simply use a filename or even a path. One of the users who shares the system can be considered 'crafty', so anything which temporarily grants administrator rights to a user account is certain to cause problems.
    What I'd like to be able to do:
    1) Create an admin account that can only run programs from a whitelist (or, failing that, from a directory). I can't find a good way to do this: as far as I can tell, SRP applies equally to ALL users. Even if I put a "deny" token on all directories on the system, such that new directories would inherit it, it could still potentially run things from the mounted USB devices. I also don't know whether it's possible to create a new directory that DOESN'T inherit from the parent, which would lack the deny token and provide admin access.
    2) Find a lightweight service that will run these programs in its local context. Windows 7 seems to block cross-privilege-level communication by default, and I haven't found such a service for Windows 7. One example seems to be "sudo" (http://pages.cpsc.ucalgary.ca/~nfriess/sudo/), but because it uses a WLNOTIFY hook, it won't work under Vista or Windows 7.
    Non-solutions:
    - RunAs: requires the administrator password! (but everyone calls it "sudo" anyway)
    - RunAs /savecred: nice idea, but appears to be completely insecure.
    - RUNASSPC: same concept as RunAs, uses "encrypted" files with credentials, but checks in user space.
    - Scheduled Tasks: "fixed" permissions make this difficult, and it doesn't support interactive processes even if it did.
    - SuRun: from Google: "Surun uses its own Windows service that adds the user to the group of administrators during program start and removes him automatically from that group again"


  • My Ubuntu VPS server is very slow

    - by askmike
    I just installed a fresh copy of Ubuntu 12.04 on my VPS because my old installation was very slow; unfortunately this did not fix the problem. By slow I mean that requests to my PHP websites take a long time: very slow (30 sec per request) to slow (3+ sec per request). When it's really bad, SSH is also laggy. The websites are:
    - askmike.org (pretty standard WordPress)
    - mvr.me (own PHP)
    How slow? Very slow: a screenshot showing a clean install of WordPress loading. Slow: a screenshot showing a small PHP-based website loading.
    The VPS has 256 MB RAM and a 25 GB HDD. Besides serving the 2 small websites it isn't doing anything, AFAIK.
    What I have installed:
    - a clean Ubuntu Server 12.04
    - a LAMP stack
    - a few things like git and Node.js (not using either)
    - OSSEC (because I thought my server was getting hammered)
    - Munin
    What I have already tried/done: I installed Munin so that I could watch IO speed and such. The problem is that I don't know what to look for in the Munin report. I checked the logs and don't see anything strange (although I don't really know what to look for besides strange/repetitive errors and GET requests). I configured the Apache MPM to:

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients           40
            MaxRequestsPerChild   0
        </IfModule>

    (Apache is using prefork, the default.)
    Stats: I copied the Munin report as it appeared at 4:50 last night to a site hosted on a shared webhost. Note that tonight my MySQL crashed somewhere after 1:00 (which is a new problem altogether), so the graph for last night might therefore look strange. Can anyone help me get my VPS up to normal speed?
    EDIT: Thanks for the replies. The VPS is 10 bucks a month and is from directvps.nl (a Dutch host; I'm also Dutch). I did two speed tests for disk IO:

        $ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
        1073741824 bytes (1.1 GB) copied, 23.1506 s, 46.4 MB/s
        $ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
        1073741824 bytes (1.1 GB) copied, 39.3796 s, 27.3 MB/s

    Anyway: how can I prove to my VPS host that it is too slow? I can understand a server being busy slowing a website down, but 5-30 sec load time for a normal PHP webpage?
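    One way to build a case for (or rule out) an oversubscribed host, offered as a suggestion: with only 256 MB of RAM, swapping and CPU steal time are the two usual suspects, and both show up in vmstat while a slow request is running. Consistently high numbers in the 'st' (stolen by the hypervisor) or 'si'/'so' (swap in/out) columns are something concrete to show the provider:

        # one sample per second for a minute while loading one of the slow pages
        vmstat 1 60
        # per-device disk utilisation and wait times (iostat is in the sysstat package)
        iostat -x 5

    High 'st' points at the host; high 'si'/'so' points at memory pressure on the VPS itself (Apache prefork with MaxClients 40 can easily exceed 256 MB once PHP is loaded).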


  • Why does this service refuse to start on Windows server 2003?

    - by PenguinCoder
    We have a Windows 2003 server with Cebos MQ1 (ver. 7 and ver. GRI) products installed that have been operational for years. After installing the Microsoft 2010 C++ Redistributable package, needed for other development, the MQ1 GRI service now fails to start. Event logs showed that two additional updates (.NET 4 and the 2010 C++ Redistributable SP2) were installed by the redistributable as well. As soon as we discovered the MQ1 service was not starting properly, we removed these three installed packages; however, the service still does not start. The dialog that pops up states 'The service started then stopped.'
    Event logs when we attempt to start the service show nothing; i.e. no errors, crashes, failures, or other information related to this service. Executing MQ1Serv.exe directly reports 'Missing command line operation, must specify install, uninstall and company abbreviation.' sc query MQ1Service(GRI) shows a clean exit with a Win32ExitCode of 0x0. Attempting to reinstall the client or server software gives an error of 'The procedure entry point ReInitializeCriticalSection could not be located in the dynamic link library KERNEL32.dll.' at the 'Registering Libraries' stage.
    At this point, further research stated that the required function is in URL.dll and to verify the library is not corrupted. Running sfc /scannow on the server replaced a few DLLs, including URL.DLL, with versions from 2005. This actually broke other applications, which required a reinstall (one of them being IE 7). After the reinstall and updates, url.dll is version 7.0.5730.13 (2009) and Kernel32.dll is version 5.2.3790.4480 (2009). The MQ1 GRI service still will not start, giving the same error as before: 'Service started then stopped'. Running a disassembler on Kernel32.dll and Url.dll shows no function named ReinitializeCriticalSection. Attempting to reinstall the MQ1 client and server, and to start the service again, fails once more. However, setting the compatibility mode on the MQ1 client install exe to 'Windows 95' actually gets the program to install. Setting the compatibility mode on the MQ1 server service does not enable it to start.
    I have been researching this problem for nearly a week and, besides the advice to scan and replace url.dll, have come to no successful conclusions. This service was operational prior to the 2010 C++ install, without any additional parameters or settings. Removing the C++ install and all service packs/updates it installed silently has not corrected the issue of the MQ1 GRI service not starting.
    Q: Has anyone else run into this or a similar issue while attempting to get a service initialized? What have I overlooked, or what else can I try in order to get this service started?


  • Linux: find out what process is using all the RAM?

    - by Timur
    Before actually asking, just to be clear: yes, I know about the disk cache, and no, it is not my case :) Sorry for this preamble :)
    I'm using CentOS 5. Every application in the system is swapping heavily, and the system is very slow. When I do free -m, here is what I get:

                     total       used       free     shared    buffers     cached
        Mem:          3952       3929         22          0          1         18
        -/+ buffers/cache:       3909         42
        Swap:        16383         46      16337

    So, I actually have only 42 MB to use! As far as I understand, -/+ buffers/cache already excludes the disk cache, so I indeed only have 42 MB, right? I thought I might be wrong, so I tried to switch off the disk caching and it had no effect - the picture remained the same.
    So, I decided to find out who is using all my RAM, and I used top for that. But, apparently, it reports that no process is using my RAM. The only process in my top is MySQL, but it is using 0.1% of RAM and 400 MB of swap. Same picture when I try to run other services or applications - everything goes into swap, and top shows that MEM is not used (0.1% maximum for any process).

        top - 15:09:00 up  2:09,  2 users,  load average: 0.02, 0.16, 0.11
        Tasks: 112 total,   1 running, 111 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.0%us,  0.0%sy,  0.0%ni, 100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   4046868k total,  4001368k used,    45500k free,      748k buffers
        Swap: 16777208k total,    68840k used, 16708368k free,    16632k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+   SWAP COMMAND
         3214 ntp       15   0 23412 5044 3916 S  0.0  0.1  0:00.00    17m ntpd
         2319 root       5 -10 12648 4460 3184 S  0.0  0.1  0:00.00   8188 iscsid
         2168 root      RT   0 22120 3692 2848 S  0.0  0.1  0:00.00    17m multipathd
         5113 mysql     18   0  474m 2356  856 S  0.0  0.1  0:00.11   472m mysqld
         4106 root      34  19  251m 1944 1360 S  0.0  0.0  0:00.11   249m yum-updatesd
         4109 root      15   0 90152 1904 1772 S  0.0  0.0  0:00.18    86m sshd
         5175 root      15   0 90156 1896 1772 S  0.0  0.0  0:00.02    86m sshd

    Restarting doesn't help and, by the way, is very slow, which I wouldn't normally expect on this machine (4 cores, 4 GB RAM, RAID1). So, with that, I'm pretty sure that it is not the disk cache that is using the RAM, because normally it would have been reduced to let other processes use RAM rather than going to swap.
    So, finally, the question: does anyone have any ideas how to find out what process is actually using the memory so heavily?
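    A couple of checks that can narrow this down, offered as suggestions: sort processes by resident memory directly (top's default sort can hide the culprit), and then compare the total against the kernel's own allocations, since memory that no userspace process owns usually means slab, page tables or a driver:

        # biggest resident-memory consumers
        ps aux --sort=-rss | head -n 15
        # kernel-side usage: look at Slab, PageTables, VmallocUsed, HugePages_*
        cat /proc/meminfo
        # live view of the slab caches
        slabtop -o

    If the RSS totals from ps are tiny while MemTotal minus MemFree is huge, the missing memory shows up somewhere in /proc/meminfo (on CentOS 5 a runaway slab cache or a HugePages reservation are common findings).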


  • Can't install .NET framework 4.0 on Windows XP professional version 2002 SP3 (OS bug?)

    - by that guy
    The .NET Framework 4.0 install fails on Windows XP Professional version 2002 SP3. I tried to run setup using "Run as..." to make sure admin rights were used (the "Protect my computer..." tick was deselected, of course). I tried everything: installing using the online and offline setup, and Windows Update. The install goes a little way and then "rolls back" and says:
    Installation did not succeed. .NET Framework 4 has not been installed because: Fatal error during installation. For more information about this problem, see the log file.
    The full log: http://pastebay.net/1433771 Any ideas?
    EDIT 1: I have found this in the log: "BlockIf: You must install the 32-bit Windows Imaging Component (WIC) before you run Setup. Please visit the Microsoft Download Center to install WIC, and then rerun Setup...." So I found it and launched "wic_x86_enu.exe" - but it said: WIC Setup error: Newer version of update is already on the system. I have already installed:
    - .NET Framework 2.0 SP2
    - .NET Framework 3.0 SP2
    - .NET Framework 3.5 SP1
    but I need 4.0.
    EDIT 2: another attempt and its log (this time a better copy of the log file): http://pastebin.com/gmGfbM9a (copy to Notepad, save as .htm and open with an internet browser). I have tried all the solutions I could find - and nothing helped. I have found something weird: when I formatted the hard drive and installed Windows XP again, .NET Framework 4.0 installed OK, but when I plugged in my 100 Mbit internet cable, the operating system kind of "locked itself" and the bug returned - I could no longer install .NET Framework 4.0 again. There was no reason for that to happen; for example, I have Windows Server 2003 on the local network, but I don't have Active Directory enabled on it or anything like that - the server just has some folders shared and that's all (all the server's "features" are at defaults). I had a second PC with the same problem - with XP on it too. This seems like an operating system bug to me. I couldn't find what was causing the problem. After many days I gave up: backed up everything, formatted the HDD and installed Windows 7 Professional 64-bit. .NET Framework 4.0 installed on it with no problem.


  • Cloud storage provider lost my data. How to back up next time?

    - by tomcam
    What do you do when cloud storage fails you? First, some background. A popular cloud storage provider (rhymes with Booger Link) damaged a bunch of my data. Getting it back was an uphill battle, with all the usual accusations that it was my fault, etc. Finally I got the data back. Yes, I can back this up with evidence. Idiotically, I stayed with them, so I totally get that the rest of this is on me.

    The problem had been with a shared folder that works with all 12 computers my business and family use with the service. We'll call that folder the Tragic Briefcase. It is a sort of global folder that's publicly visible to all computers on the service. It's our main repository. Today I decided to deal with some residual effects of the Crash of '11. Part of the damage was that on just one of my computers (my primary, of course) all the documents in the Tragic Briefcase were duplicated in the Windows My Documents folder. I finally started deleting them. But guess what: though they appeared to be duplicated in the file system, removing them from My Documents on the primary PC caused them to disappear from the Tragic Briefcase too. They efficiently disappeared from all the other computers' Tragic Briefcases as well. So now 21 GB of files are gone, and of course I don't know which ones.

    I want to avoid this in the future. Apart from using a different storage provider, the bigger picture is this: how do I back up my cloud data? A complete backup every week or so from the web to local storage would cause me to exceed my ISP's bandwidth. Do I need to back up each of my 12 PCs locally? I do use Backupify for my primary Google Docs, but I have been storing taxes, confidential documents, Photoshop source, video source files, and so on using the web service. So it's a lot of data, but I need to keep it safe. Backing up locally would also mean two backup drives or some kind of RAID per PC, right, because you can't trust a single point of failure? Assuming I move to DropBox or something of its ilk, what is the best way to make sure that if the next cloud storage provider messes up, I can restore?
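
    One way around the bandwidth cap is to back up the already-synced local copy of the shared folder instead of pulling everything down from the web again: an incremental copy to an external drive moves nothing over the ISP link and only touches files that changed since the last run. The sketch below is a hedged illustration, not advice from the post; the source and destination paths are hypothetical, and note that it does not mirror deletions, which is deliberate if the worry is the provider silently removing files.

        # Hedged sketch: incremental copy of a locally synced cloud folder to an
        # external drive. Only files whose size or modification time changed are
        # copied; deletions on the source are intentionally NOT propagated.
        import os
        import shutil

        SOURCE = r"C:\Users\tom\Dropbox"          # hypothetical synced folder
        DESTINATION = r"E:\cloud-backup\Dropbox"  # hypothetical external drive

        def needs_copy(src, dst):
            if not os.path.exists(dst):
                return True
            src_stat, dst_stat = os.stat(src), os.stat(dst)
            return (src_stat.st_size != dst_stat.st_size
                    or int(src_stat.st_mtime) > int(dst_stat.st_mtime))

        def incremental_backup(source, destination):
            copied = 0
            for root, _dirs, files in os.walk(source):
                relative = os.path.relpath(root, source)
                target_dir = destination if relative == "." else os.path.join(destination, relative)
                os.makedirs(target_dir, exist_ok=True)
                for name in files:
                    src = os.path.join(root, name)
                    dst = os.path.join(target_dir, name)
                    if needs_copy(src, dst):
                        shutil.copy2(src, dst)  # copy2 preserves timestamps
                        copied += 1
            return copied

        if __name__ == "__main__":
            print(incremental_backup(SOURCE, DESTINATION), "files copied")

    Run on whichever PC holds a full copy of the Tragic Briefcase, this keeps a second, provider-independent copy without any of the other 11 machines needing their own backup drives.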

    Read the article

  • SharePoint Error: [COMException (0x80004005): Cannot complete this action.

    - by ifunky
    Hi, I've created a site basic definition that uses different master and default pages. Everything works quite well except for whenever I create a new site based on the definition I receive the following error when browsing to the new site: [COMException (0x80004005): Cannot complete this action. Please try again.] Please try again.] Microsoft.SharePoint.Library.SPRequestInternalClass.GetFileAndMetaInfo(String bstrUrl, Byte bPageView, Byte bPageMode, Byte bGetBuildDependencySet, String bstrCurrentFolderUrl, Boolean& pbCanCustomizePages, Boolean& pbCanPersonalizeWebParts, Boolean& pbCanAddDeleteWebParts, Boolean& pbGhostedDocument, Boolean& pbDefaultToPersonal, String& pbstrSiteRoot, Guid& pgSiteId, UInt32& pdwVersion, String& pbstrTimeLastModified, String& pbstrContent, Byte& pVerGhostedSetupPath, UInt32& pdwPartCount, Object& pvarMetaData, Object& pvarMultipleMeetingDoclibRootFolders, String& pbstrRedirectUrl, Boolean& pbObjectIsList, Guid& pgListId, UInt32& pdwItemId, Int64& pllListFlags, Boolean& pbAccessDenied, Guid& pgDocId, Byte& piLevel, UInt64& ppermMask, Object& pvarBuildDependencySet, UInt32& pdwNumBuildDependencies, Object& pvarBuildDependencies, String& pbstrFolderUrl, String& pbstrContentTypeOrder) +0 Microsoft.SharePoint.Library.SPRequest.GetFileAndMetaInfo(String bstrUrl, Byte bPageView, Byte bPageMode, Byte bGetBuildDependencySet, String bstrCurrentFolderUrl, Boolean& pbCanCustomizePages, Boolean& pbCanPersonalizeWebParts, Boolean& pbCanAddDeleteWebParts, Boolean& pbGhostedDocument, Boolean& pbDefaultToPersonal, String& pbstrSiteRoot, Guid& pgSiteId, UInt32& pdwVersion, String& pbstrTimeLastModified, String& pbstrContent, Byte& pVerGhostedSetupPath, UInt32& pdwPartCount, Object& pvarMetaData, Object& pvarMultipleMeetingDoclibRootFolders, String& pbstrRedirectUrl, Boolean& pbObjectIsList, Guid& pgListId, UInt32& pdwItemId, Int64& pllListFlags, Boolean& pbAccessDenied, Guid& pgDocId, Byte& piLevel, UInt64& ppermMask, Object& pvarBuildDependencySet, UInt32& pdwNumBuildDependencies, Object& pvarBuildDependencies, String& pbstrFolderUrl, String& pbstrContentTypeOrder) +219 [SPException: Cannot complete this action. Please try again.] I'm able to work around this by checking out the new master page and checking it back in again and after doing so there are no further issues at all. Any ideas to what could cause this? 
Thanks Dan ONET.XML module section: <Modules> <Module Name="CustomMasterPage" List="116" Url="_catalogs/masterpage" RootWebOnly="FALSE"> <File Url="Shoes.master" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE" /> </Module> <Module Name="Default" List="116" Url=""> <File Url="default.aspx" Name="default.aspx" NavBarHome="True" IgnoreIfAlreadyExists="FALSE"> <AllUsersWebPart WebPartZoneID="Left" WebPartOrder="1"> &lt;webParts&gt;&lt;webPart xmlns="http://schemas.microsoft.com/WebPart/v3"&gt;&lt;metaData&gt;&lt;type name="BCM.SharePoint.Shoes.ShoesComponents.FooterLinks, BCM.SharePoint.Shoes.ShoesComponents, Version=1.0.0.0, Culture=neutral, PublicKeyToken=2881713f39360b71" /&gt;&lt;importErrorMessage&gt;Cannot import this Web Part.&lt;/importErrorMessage&gt;&lt;/metaData&gt;&lt;data&gt;&lt;properties&gt;&lt;property name="AllowClose" type="bool"&gt;True&lt;/property&gt;&lt;property name="Width" type="string" /&gt;&lt;property name="MyProperty" type="string"&gt;Hello SharePoint&lt;/property&gt;&lt;property name="AllowMinimize" type="bool"&gt;True&lt;/property&gt;&lt;property name="AllowConnect" type="bool"&gt;True&lt;/property&gt;&lt;property name="ChromeType" type="chrometype"&gt;None&lt;/property&gt;&lt;property name="TitleIconImageUrl" type="string"&gt;/_layouts/images/BCM_SharePoint_Shoes/wp_FooterLinks.gif&lt;/property&gt;&lt;property name="Description" type="string"&gt;FooterLinks Description&lt;/property&gt;&lt;property name="Hidden" type="bool"&gt;False&lt;/property&gt;&lt;property name="TitleUrl" type="string" /&gt;&lt;property name="AllowEdit" type="bool"&gt;True&lt;/property&gt;&lt;property name="Height" type="string" /&gt;&lt;property name="MissingAssembly" type="string"&gt;Cannot import this Web Part.&lt;/property&gt;&lt;property name="HelpUrl" type="string" /&gt;&lt;property name="Title" type="string" /&gt;&lt;property name="CatalogIconImageUrl" type="string"&gt;/_layouts/images/BCM_SharePoint_Shoes/wp_FooterLinks.gif&lt;/property&gt;&lt;property name="Direction" type="direction"&gt;NotSet&lt;/property&gt;&lt;property name="ChromeState" type="chromestate"&gt;Normal&lt;/property&gt;&lt;property name="AllowZoneChange" type="bool"&gt;True&lt;/property&gt;&lt;property name="AllowHide" type="bool"&gt;True&lt;/property&gt;&lt;property name="HelpMode" type="helpmode"&gt;Modeless&lt;/property&gt;&lt;property name="ExportMode" type="exportmode"&gt;All&lt;/property&gt;&lt;/properties&gt;&lt;/data&gt;&lt;/webPart&gt;&lt;/webParts&gt; </AllUsersWebPart> <AllUsersWebPart WebPartZoneID="Left" WebPartOrder="0"> &lt;webParts&gt;&lt;webPart xmlns="http://schemas.microsoft.com/WebPart/v3"&gt;&lt;metaData&gt;&lt;type name="BCM.SharePoint.Shoes.ShoesComponents.SubFooterLinks, BCM.SharePoint.Shoes.ShoesComponents, Version=1.0.0.0, Culture=neutral, PublicKeyToken=2881713f39360b71" /&gt;&lt;importErrorMessage&gt;Cannot import this Web Part.&lt;/importErrorMessage&gt;&lt;/metaData&gt;&lt;data&gt;&lt;properties&gt;&lt;property name="AllowClose" type="bool"&gt;True&lt;/property&gt;&lt;property name="Width" type="string" /&gt;&lt;property name="MyProperty" type="string"&gt;Hello SharePoint&lt;/property&gt;&lt;property name="AllowMinimize" type="bool"&gt;True&lt;/property&gt;&lt;property name="AllowConnect" type="bool"&gt;True&lt;/property&gt;&lt;property name="ChromeType" type="chrometype"&gt;None&lt;/property&gt;&lt;property name="TitleIconImageUrl" type="string"&gt;/_layouts/images/BCM_SharePoint_Shoes/wp_SubFooterLinks.gif&lt;/property&gt;&lt;property name="Description" type="string"&gt;Shoes 
home page links (under the hero image)&lt;/property&gt;&lt;property name="Hidden" type="bool"&gt;False&lt;/property&gt;&lt;property name="TitleUrl" type="string" /&gt;&lt;property name="AllowEdit" type="bool"&gt;True&lt;/property&gt;&lt;property name="Height" type="string" /&gt;&lt;property name="MissingAssembly" type="string"&gt;Cannot import this Web Part.&lt;/property&gt;&lt;property name="HelpUrl" type="string" /&gt;&lt;property name="Title" type="string" /&gt;&lt;property name="CatalogIconImageUrl" type="string"&gt;/_layouts/images/BCM_SharePoint_Shoes/wp_SubFooterLinks.gif&lt;/property&gt;&lt;property name="Direction" type="direction"&gt;NotSet&lt;/property&gt;&lt;property name="ChromeState" type="chromestate"&gt;Normal&lt;/property&gt;&lt;property name="AllowZoneChange" type="bool"&gt;True&lt;/property&gt;&lt;property name="AllowHide" type="bool"&gt;True&lt;/property&gt;&lt;property name="HelpMode" type="helpmode"&gt;Modeless&lt;/property&gt;&lt;property name="ExportMode" type="exportmode"&gt;All&lt;/property&gt;&lt;/properties&gt;&lt;/data&gt;&lt;/webPart&gt;&lt;/webParts&gt; </AllUsersWebPart> <NavBarPage Name="$Resources:core,nav_Home;" ID="1002" Position="Start" /> <NavBarPage Name="$Resources:core,nav_Home;" ID="0" Position="Start" /> </File> </Module> </Modules> LOG OUTPUT: 05/21/2010 12:22:55.11 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 72nz Medium Videntityinfo::isFreshToken reported failure. 05/21/2010 12:22:55.19 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services Fields 88yv Medium Creating default lists 05/21/2010 12:22:55.19 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 72lp Medium Creating directory Lists 05/21/2010 12:22:55.26 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services Fields 88yf Medium Creating list "Master Page Gallery" in web "http://mmm-dev-ll/sites/Shoes/test" at URL "_catalogs/masterpage", (setuppath: "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\Template\global\lists\mplib") 05/21/2010 12:22:55.28 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services Fields 88y1 Medium No document templates uploaded for list "Master Page Gallery" -- none found for list template "100". 05/21/2010 12:22:55.28 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 72kc Medium Failed to find generic XML file at "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\Template\xml\onet.xml", falling back to global site definition. 05/21/2010 12:22:56.22 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services Fields 88yz Medium Creating default modules at URL "http://mmm-dev-ll/sites/Shoes/test" 05/21/2010 12:22:56.22 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 8e27 Medium Ensuring module folder _catalogs/masterpage 05/21/2010 12:22:56.89 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 72h7 Medium Applying template "SubSite#1" to web at URL "http://mmm-dev-ll/sites/Shoes/test". 
05/21/2010 12:22:57.09 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services Fields 88yy Medium Activating web-scoped features for template "SubSite#1" at URL "http://mmm-dev-ll/sites/Shoes/test" 05/21/2010 12:22:57.12 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 8l1c Medium Preparing 20 features for activation 05/21/2010 12:22:57.14 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 8l1d Medium Feature Activation: Batch Activating Features at URL http://mmm-dev-ll/sites/Shoes/test 'AnnouncementsList' (ID: '00bfea71-d1ce-42de-9c63-a44004ce0104'), 'ContactsList' (ID: '00bfea71-7e6d-4186-9ba8-c047ac750105'), 'CustomList' (ID: '00bfea71-de22-43b2-a848-c05709900100'), 'DataSourceLibrary' (ID: '00bfea71-f381-423d-b9d1-da7a54c50110'), 'DiscussionsList' (ID: '00bfea71-6a49-43fa-b535-d15c05500108'), 'DocumentLibrary' (ID: '00bfea71-e717-4e80-aa17-d0c71b360101'), 'EventsList' (ID: '00bfea71-ec85-4903-972d-ebe475780106'), 'GanttTasksList' (ID: '00bfea71-513d-4ca0-96c2-6a47775c0119'), 'GridList' (ID: '00bfea71-3a1d-41d3-a0ee-651d11570120'), 'IssuesList' (ID: '00bfea71-5932-4f9c-ad71-1557e5751100'), 'LinksList' (ID: '00bfea71-2062-426c-90bf-714c59600103'), 'NoCodeWorkflowLibrary' (ID: '00bfe... 05/21/2010 12:22:57.14* w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 8l1d Medium ...a71-f600-43f6-a895-40c0de7b0117'), 'PictureLibrary' (ID: '00bfea71-52d4-45b3-b544-b1c71b620109'), 'SurveysList' (ID: '00bfea71-eb8a-40b1-80c7-506be7590102'), 'TasksList' (ID: '00bfea71-a83e-497e-9ba0-7a5c597d0107'), 'WebPageLibrary' (ID: '00bfea71-c796-4402-9f2f-0eb9a6e71b18'), 'workflowProcessList' (ID: '00bfea71-2d77-4a75-9fca-76516689e21a'), 'WorkflowHistoryList' (ID: '00bfea71-4ea5-48d4-a4ad-305cf7030140'), 'XmlFormLibrary' (ID: '00bfea71-1e1d-4562-b56a-f05371bb0115'), 'TeamCollab' (ID: '00bfea71-4ea5-48d4-a4ad-7ea5c011abe5'), . 05/21/2010 12:22:57.15 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 8l1f Medium Feature Activation: Batch Activated Features at URL http://mmm-dev-ll/sites/Shoes/test 'AnnouncementsList' (ID: '00bfea71-d1ce-42de-9c63-a44004ce0104'), 'ContactsList' (ID: '00bfea71-7e6d-4186-9ba8-c047ac750105'), 'CustomList' (ID: '00bfea71-de22-43b2-a848-c05709900100'), 'DataSourceLibrary' (ID: '00bfea71-f381-423d-b9d1-da7a54c50110'), 'DiscussionsList' (ID: '00bfea71-6a49-43fa-b535-d15c05500108'), 'DocumentLibrary' (ID: '00bfea71-e717-4e80-aa17-d0c71b360101'), 'EventsList' (ID: '00bfea71-ec85-4903-972d-ebe475780106'), 'GanttTasksList' (ID: '00bfea71-513d-4ca0-96c2-6a47775c0119'), 'GridList' (ID: '00bfea71-3a1d-41d3-a0ee-651d11570120'), 'IssuesList' (ID: '00bfea71-5932-4f9c-ad71-1557e5751100'), 'LinksList' (ID: '00bfea71-2062-426c-90bf-714c59600103'), 'NoCodeWorkflowLibrary' (ID: '00bfea... 05/21/2010 12:22:57.15* w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 8l1f Medium ...71-f600-43f6-a895-40c0de7b0117'), 'PictureLibrary' (ID: '00bfea71-52d4-45b3-b544-b1c71b620109'), 'SurveysList' (ID: '00bfea71-eb8a-40b1-80c7-506be7590102'), 'TasksList' (ID: '00bfea71-a83e-497e-9ba0-7a5c597d0107'), 'WebPageLibrary' (ID: '00bfea71-c796-4402-9f2f-0eb9a6e71b18'), 'workflowProcessList' (ID: '00bfea71-2d77-4a75-9fca-76516689e21a'), 'WorkflowHistoryList' (ID: '00bfea71-4ea5-48d4-a4ad-305cf7030140'), 'XmlFormLibrary' (ID: '00bfea71-1e1d-4562-b56a-f05371bb0115'), 'TeamCollab' (ID: '00bfea71-4ea5-48d4-a4ad-7ea5c011abe5'), . 
05/21/2010 12:22:57.15 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 88jb Medium Feature Activation: Activating Feature 'RadEditorFeatureRichText' (ID: '747755cd-d060-4663-961c-9b0cc43724e9') at URL http://mmm-dev-ll/sites/Shoes/test. 05/21/2010 12:22:57.15 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 75fb Medium Calling 'FeatureActivated' method of SPFeatureReceiver for Feature 'RadEditorFeatureRichText' (ID: '747755cd-d060-4663-961c-9b0cc43724e9'). 05/21/2010 12:22:57.20 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 75f8 Medium Feature Activation: Feature 'RadEditorFeatureRichText' (ID: '747755cd-d060-4663-961c-9b0cc43724e9') was activated at URL http://mmm-dev-ll/sites/Shoes/test. 05/21/2010 12:22:57.51 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services Fields 88yv Medium Creating default lists 05/21/2010 12:22:57.51 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 72lp Medium Creating directory Lists 05/21/2010 12:22:57.51 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services Fields 88yz Medium Creating default modules at URL "http://mmm-dev-ll/sites/Shoes/test" 05/21/2010 12:22:57.51 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 8e27 Medium Ensuring module folder _catalogs/masterpage 05/21/2010 12:22:57.56 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 72ix Medium Not enough information to determine a list for module "Default". Assuming no list for this module. 05/21/2010 12:22:57.87 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 72h8 Medium Successfully applied template "SubSite#1" to web at URL "http://mmm-dev-ll/sites/Shoes/test". 05/21/2010 12:22:59.48 w3wp.exe (0x1E40) 0x0980 Windows SharePoint Services General 8e2s Medium Unknown SPRequest error occurred. More information: 0x80070057 05/21/2010 12:23:07.06 OWSTIMER.EXE (0x0884) 0x106C Office Server Setup and Upgrade 8u3j High Registry key value {SearchThrottled} was not found under registry hive {Software\Microsoft\Office Server\12.0}. Assuming search sku is not throttled. 05/21/2010 12:23:07.08 OWSTIMER.EXE (0x0884) 0x106C Search Server Common MS Search Administration 90gf Medium SQL: dbo.proc_MSS_PropagationGetQueryServers 05/21/2010 12:23:07.09 OWSTIMER.EXE (0x0884) 0x106C Search Server Common MS Search Administration 8wni High Resuming default catalog with reason 'GPR_PROPAGATION' for application 'SharedServices1'... 05/21/2010 12:23:07.11 OWSTIMER.EXE (0x0884) 0x106C Search Server Common MS Search Administration 8wnj High Resuming anchor text catalog with reason GPR_PROPAGATION' for application 'SharedServices1'... 05/21/2010 12:23:07.14 OWSTIMER.EXE (0x0884) 0x106C Search Server Common MS Search Administration 8dvl Medium Search application '3c6751cc-37b0-470a-bfa2-bfd0b5635fe1': Provision start addresses in default content source. 
05/21/2010 12:23:07.15 OWSTIMER.EXE (0x0884) 0x106C Search Server Common MS Search Administration 7hmh High exception in SearchUpgradeProvisioner Keyword Config System.InvalidOperationException: jobServerSearchServiceInstance is null at Microsoft.Office.Server.Search.Administration.SearchUpgradeProvisioner..ctor(SearchServiceInstance searchServiceInstance) at Microsoft.Office.Server.Search.Administration.OSSPrimaryGathererProject.ProvisionContentSources() 05/21/2010 12:23:29.19 OWSTIMER.EXE (0x0884) 0x0FFC SharePoint Portal Server Business Data 79bv High Initiating BDC Cache Invalidation Check in AppDomain 'DefaultDomain' 05/21/2010 12:23:29.19 OWSTIMER.EXE (0x0884) 0x0FFC SharePoint Portal Server Business Data 79bx High Completed BDC Cache Invalidation Check in AppDomain 'DefaultDomain' 05/21/2010 12:23:45.84 OWSTIMER.EXE (0x0884) 0x08A8 SharePoint Portal Server SSO 8inc Medium In SSOService::Synch(), sso database conn string: 05/21/2010 12:23:50.80 OWSTIMER.EXE (0x0884) 0x0F14 Excel Services Excel Services Administration 8tqi Medium ExcelServerSharedWebApplication.Synchronize: Starting synchronize for instance of Excel Services in SSP 'SharedServices1'. 05/21/2010 12:23:50.80 OWSTIMER.EXE (0x0884) 0x0F14 Excel Services Excel Services Administration 8tqj Medium ExcelServerSharedWebApplication.Synchronize: Successfully synchronized instance of Excel Services in SSP 'SharedServices1'. 05/21/2010 12:23:52.31 w3wp.exe (0x1E40) 0x0980 SharePoint Portal Server Runtime 8gp7 Medium Topology cache updated. (AppDomain: /LM/W3SVC/1963195510/Root-1-129188762904047141)
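
    When reproducing the site-creation failure it is easier to read the ULS trace if only the relevant entries are pulled out - for example the "Unknown SPRequest error occurred" and "falling back to global site definition" lines quoted above. The helper below is a hedged sketch, not part of the original post: the keyword list is an assumption taken from the log excerpt, and it relies only on ULS logs being plain tab-separated text with the timestamp first.

        # Hedged sketch: filter a SharePoint ULS trace log for the entries that
        # matter when chasing the error above. Keywords are assumptions drawn
        # from the quoted log; pass the log file path as the first argument.
        import sys

        KEYWORDS = (
            "Unknown SPRequest error",
            "Cannot complete this action",
            "falling back to global site definition",
        )

        def interesting_lines(path, keywords=KEYWORDS):
            with open(path, "r", encoding="utf-8", errors="replace") as log:
                for line in log:
                    if any(keyword in line for keyword in keywords):
                        yield line.rstrip()

        if __name__ == "__main__":
            for line in interesting_lines(sys.argv[1]):
                print(line)

    Comparing the filtered output from a site created before and after the master-page check-out/check-in workaround should narrow down which step the GetFileAndMetaInfo call is failing on.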

    Read the article
