Search Results

Search found 10297 results on 412 pages for 'real steel4819'.

  • pdns-recursor allocates resources to non-existing queries

    - by azzid
    I've got a lab server running pdns-recursor. I set it up to experiment with rate limiting, so it has been resolving requests openly from the whole internet for weeks. My idea was that sooner or later it would get abused, giving me a real use case to experiment with. To keep track of usage I set up Nagios to monitor the number of concurrent queries to the server. Today I got notice from Nagios that my specified limit had been reached. I logged in to start trimming away the malicious queries I was expecting; however, when I started looking at it I couldn't see the expected traffic. What I found is that even though I have over 20 concurrent queries registered by the server, I see no requests in the logs. The following commands describe the situation well:

      $ sudo rec_control get concurrent-queries; sudo rec_control top-remotes
      22
      Over last 0 queries:

    How can there be 22 concurrent queries when the server has 0 queries registered? EDIT: Figured it out! To get top-remotes working I needed to set:

      #################################
      # remotes-ringbuffer-entries  maximum number of packets to store statistics for
      #
      remotes-ringbuffer-entries=100000

    It defaults to 0, storing no information to base top-remotes statistics on.

  • Steps to debugging web site latency and timeout issues

    - by Paperjam
    I have a client who has multiple offices around the country, all of which share the same Internet connection via their WAN. One specific office for this client is experiencing severe latency and timeout issues with my web site. Most, but not all, of the latency occurs on a specific ASPX page where multiple postbacks are made while populating cascading dropdown lists (rapid form submits). The latency is sporadic and can be anywhere from a few seconds to a full timeout. There is no indication that the timeouts are occurring on the server's end. The IT guy for this client is having trouble narrowing down the problem. Since it affects only one location for one client, I am led to believe it is not something with my site but something specific to that location. He has measured ping times while using the site and noticed no real variance in them even when the page has timed out. I believe this may be caused by some sort of Internet filter that doesn't like rapid form submits, but beyond a hunch I haven't a clue. My question is: what things should I tell the IT guy to look for? While I'm not trying to provide active tech support for this issue, I would at least like to glean an understanding of what is going on and try to offer some sort of advice. Thank you.
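
    One quick way to separate network delay from server delay is to time each phase of a request from the affected office. A minimal sketch (example.com/page.aspx stands in for the real URL): consistently slow connect times point at the network path, while a slow time-to-first-byte over a fast connection points at the application.

      curl -o /dev/null -s -w 'dns: %{time_namelookup}  connect: %{time_connect}  ttfb: %{time_starttransfer}  total: %{time_total}\n' http://example.com/page.aspx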

  • I can't connect to the internet via LAN cable because the CR2032 battery died and my BIOS info is now empty [closed]

    - by Rand Om Guy
    I have a Compaq CQ61-112SL that is about 5 years old now... the main battery is almost dead; it doesn't last more than 10 minutes. Anyway, my problem is that my motherboard battery ran out of energy a few days ago, and since then I can't access the internet through a LAN cable, only via wifi. I need cable though. I saw that on my BIOS setup page a bunch of parameters were missing, like serial number, UUID, product number and stuff like that. Also, when I start the notebook it prints something like: No serial found. I don't really know if the empty BIOS is the reason my LAN cable doesn't work, but I assume that's it. If it's not, please enlighten me. Or anyway, tell me how to update the serial number and product number to the real ones (instead of the 0000000000000 that is now in my BIOS). I downloaded HP DMI, which should make it possible to set these variables in the BIOS, but I'm on Windows 8 64-bit and the executable file that I need to run for my laptop model says it can't run on 64-bit.

  • Linux CentOS strange memory readings

    - by user2008937
    I am actually a young junior sysadmin. I have a question - I am trying to understand how Linux deals with memory... While playing around with different monitoring programs I found some strange things. When I run top on my laptop it shows me that the firefox process with PID 8778 takes 18.3% of memory (%MEM column).

      $ grep "MemTotal" /proc/meminfo

    The above command gives me 1848336 kB / 1024 = 1805 MB of memory (that's OK - I have 2 GB of RAM). So if the firefox process takes 18.3% of memory (according to top's %MEM column), then it takes 0.183 * 1805, which is approximately 325 MB of memory. Quite a lot for firefox... But well, in Linux there are lots of shared libraries that programs commonly use (like the famous libc). Those libraries are added to the memory utilization of every process that uses them, even though they all read the same file (a single object in memory). So top may show too big a memory utilization because of those shared libraries. Well, it is time to use pmap, which should show us the real memory utilization of a process. But...

      $ pmap -d $(pidof firefox)
      mapped: 983460K    writeable/private: 757164K    shared: 66416K

    So pmap shows that 983460/1024 = 993 MB of memory is mapped by this process. That is in fact much bigger than the memory utilization shown by top. What's wrong here? How can pmap show more than top, even when top adds the shared libraries (which in fact are single objects in memory) to each process that uses them, and pmap omits them? Regards, Krzysztof
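    The two tools count different things: top's %MEM reflects resident set size (pages actually in RAM), while pmap's "mapped" total is virtual address space, including mappings never faulted in. A fairer per-process figure is PSS, which splits each shared page among the processes sharing it. A minimal sketch of summing it from the kernel's smaps file (assumes a single firefox PID):

      awk '/^Pss:/ { total += $2 } END { print total " kB" }' /proc/$(pidof firefox)/smaps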

  • bond0:0 + define virtual IP

    - by yael
    In my Linux server I have the following (Linux version RedHat 5.3.0.0; this server has only one LAN):

      $ more /etc/sysconfig/network-scripts/ifcfg-bond0:0
      DEVICE=bond0:0
      ONBOOT=yes
      BOOTPROTO=static
      IPADDR=10.10.10.12
      NETMASK=255.255.255.0

      $ ifconfig -a
      bond0     Link encap:Ethernet  HWaddr 00:00:00:00:00:00
                UP BROADCAST MASTER MULTICAST  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

      bond0:0   Link encap:Ethernet  HWaddr 00:00:00:00:00:00
                inet addr:10.10.10.12  Bcast:1.1.1.255  Mask:255.255.255.0
                UP BROADCAST MASTER MULTICAST  MTU:1500  Metric:1

      eth0      Link encap:Ethernet  HWaddr 00:0E:0C:C7:F8:92
                inet addr:1.1.1.1  Bcast:1.1.1.255  Mask:255.255.255.0
                inet6 addr: fe80::20e:cff:fec7:f892/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:8600 errors:0 dropped:0 overruns:0 frame:0
                TX packets:4764 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:717979 (701.1 KiB)  TX bytes:598620 (584.5 KiB)
                Memory:b8820000-b8840000

    My problems: why do I get HWaddr 00:00:00:00:00:00 and not the real MAC address? I can't ping another server at 10.10.10.11 from my server. Is it possible to define bond0:0 when I have only one LAN (eth0)? Other info:

      $ more /etc/modprobe.conf
      alias eth0 e1000e
      alias eth1 e1000e
      alias eth2 e1000e
      alias eth3 e1000e
      alias scsi_hostadapter mptbase
      alias scsi_hostadapter1 mptsas
      alias scsi_hostadapter2 ata_piix
      alias bond0 bonding
      alias bond1 bonding
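    The all-zero HWaddr is the clue: a bond copies its MAC from its first slave, so 00:00:00:00:00:00 means bond0 has no slave interfaces at all, and neither bond0 nor the bond0:0 alias will pass traffic. A minimal sketch of enslaving eth0, assuming eth0 really should carry this bond (an ifcfg-bond0 for the parent must exist too, not just the :0 alias):

      # /etc/sysconfig/network-scripts/ifcfg-eth0
      DEVICE=eth0
      ONBOOT=yes
      BOOTPROTO=none
      MASTER=bond0
      SLAVE=yes

    With a single NIC, though, the bond adds nothing except room to add slaves later; the 10.10.10.12 address could just as well live on a plain eth0:0 alias.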

  • Load balancing with nginx and Tomcat

    - by London
    Hello, this should be fairly easy to answer for any sysadmin. The problem is that I'm not a server admin, but I have to complete this task; I'm very close but still not managing to do it. Here is what I mean: I have two Tomcat instances running on machine1 and machine2. People usually access those by visiting the URLs:

      http://machine1:8080/appName
      http://machine2:9090/appName

    The problem is that when I set up nginx with the domain name, i.e. domain.com, nginx sends requests to http://machine1:8080/ and http://machine2:9090/ instead of http://machine1:8080/appName and http://machine2:9090/appName. Here is my configuration (very basic, as can be noted):

      upstream backend {
          server machine1:8080;
          server machine2:9090;
      }

      server {
          listen 80;
          server_name www.mydomain.com mydomain.com;

          location / {
              # needed to forward user's IP address to rails
              proxy_set_header X-Real-IP $remote_addr;
              # needed for HTTPS
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header Host $http_host;
              proxy_redirect off;
              proxy_max_temp_file_size 0;
              proxy_pass http://backend;
          } #end location
      } #end server

    What changes must I make so that when a user visits mydomain.com, he is transferred to either machine1:8080/appName or machine2:9090/appName? Thank you.
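    A minimal sketch of one common fix, assuming /appName is the context path on both backends: giving proxy_pass a URI part makes nginx replace the matched location prefix with it, so / maps onto /appName/:

      location / {
          # ... same proxy_set_header lines as above ...
          proxy_pass http://backend/appName/;
      }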

  • cset shield --kthread on: should I use this?

    - by lori
    I'm reading up on CPU shielding using Alex Tsariounov's cset utility here: https://rt.wiki.kernel.org/index.php/Cpuset_Management_Utility/tutorial In the tutorial I'm finding the wording around migrating kernel threads from having access to all CPUs to running only in a certain cpuset a bit ambiguous. The tutorial says the following: "Some kernel threads can be moved into the unshielded system cpuset as well. These are the threads that are not bound to specific CPUs. If a kernel thread is bound to a specific CPU, then it is generally not a good idea to move that thread to the system set because at worst it may hang the system and at best it will slow the system down significantly. These threads are usually the IRQ threads on a real time Linux kernel, for example, and you may want to not move these kernel threads into system. If you leave them in the root cpuset, then they will have access to all CPUs." The tutorial then goes on to say: "However, if your application demands an even 'quieter' shield, then you can move all movable kernel threads into the unshielded system set with the following command."

      [zuul:cpuset-trunk]# cset shield -k on
      cset: --> activating kthread shielding
      cset: kthread shield activated, moving 70 tasks into system cpuset...
      [==================================================]%
      cset: done

    I am confused by this final sentence. By using the word "however", it seems to suggest that you typically should not move the movable kernel threads into the unshielded system set. Is this the case, or is it safe to move kernel threads which can be moved into a cpuset, thereby preventing them from running on some CPUs?

  • Xen or KVM? Please help me decide and implement the one which is better

    - by JohnAdams
    I have been researching implementing virtualization for a server running 3 guests - two Linux-based and one Windows. After trying my hand at XenServer, I am impressed with the architecture and wanted to use the open-source Xen, which is when I started hearing a lot more about KVM - how good it is, how it's the future, etc. So, could anyone here please help me answer some of my queries about KVM vs. Xen? Based on my requirement of three VMs on one server, which is better for performance - KVM or Xen - considering one of the Linux VMs will work as a file server, one as a mail server, and the third as a Windows server? Is KVM stable? What about upgrades? What about Xen - I cannot find support for it on Ubuntu? Are there any published benchmarks of both Xen and KVM? I cannot seem to find any. If I go with Xen, will it be possible to move to KVM later, or vice versa? In summary, I am looking for real answers on which one I should use: Xen or KVM?

  • Nginx + SSI doesn't work

    - by boopidoopi
    I have some problems: nginx doesn't work with SSI. Nginx listens on port 80 (frontend), apache2 listens on port 81 (backend). This is my nginx configuration:

      server {
          listen 80;
          server_name test.dev www.test.dev;
          error_log /var/log/nginx/error.log debug;
          log_subrequest on;

          location / {
              ssi on;
              proxy_pass http://localhost:81;
              proxy_redirect off;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              client_max_body_size 15m;
              client_body_buffer_size 128k;
          }
      }

    The SSI include in test.dev's index.php:

      <!--# include virtual="http:test.dev/test.html" --

    When I open test.dev/index.php I see a clean page. In the page source:

      <!--# include virtual="http:test.dev/test.html" --

    So how do I enable SSI in nginx? Can you help me?
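    A minimal sketch of the likely fix: the directive itself is malformed. SSI allows no space between <!--# and the command name, needs a proper --> terminator, and nginx resolves virtual includes as local URIs rather than absolute URLs:

      <!--#include virtual="/test.html" -->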

  • Scheduled Task unable to create/update any files

    - by East of Nowhere
    I have several tasks in Task Scheduler on Windows Server 2008 SP2 (32-bit), and they all successfully "do their work" except for actually creating or updating any files. All the tasks point to simple .cmd files that do the real work, but beyond that there's no pattern: some call robocopy with the /LOG option, some call .exe files I wrote that manipulate XML files, some just do stuff with > redirection. With all of them, if I double-click the .cmd file myself, it works fine and the files are created or updated or whatever. If I run it from Task Scheduler (on the schedule or just by clicking Run), the task always completes "successfully" but without any of the desired changes to files. I don't see any "unable to create file" errors in Event Viewer either. The tasks all run as a specific account, but I have logged in as that account and verified that it has permissions to do everything it needs to. Further details: the task is set to run whether the user is logged in or not, and is configured for "Windows Vista or Windows Server 2008" (there is no other "configure for" option available).
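    When a job reports success but leaves no files behind, one first step is to make the task record what it actually sees. A minimal diagnostic sketch prepended to one of the .cmd files (C:\temp\task-debug.log is a hypothetical path): a surprising identity or working directory, or a relative path resolving somewhere unexpected, usually shows up here.

      whoami >> C:\temp\task-debug.log 2>&1
      echo %CD% >> C:\temp\task-debug.log
      echo %DATE% %TIME% >> C:\temp\task-debug.log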

  • How to Monitor Network in Medium-Sized Company?

    - by Kyle Lowry
    I work at a medium sized company (100+ employees). An issue that has been cropping up is network performance, internet access in particular. We have about 70 or more computers, a mix of Mac OS X and Windows XP & 7 machines. We have several servers (Exchange server, PC file servers, MS SQL, Blackberry, FTP, Mac server, etc). There are four main switches, a SonicWall firewall, and probably a couple routers in the server room with a dozen or so more scattered around the building. The network structure has grown organically over a number of years; and, as far as I know, there really isn't a monitoring solution in place. When we experience network issues (slow connections, dropped packets, and so on), our general solution is to power cycle some hardware or go around to each employee and ask them if they are uploading/downloading any large files. This is really inefficient and time consuming, and it does not allow us to monitor the network, tackling potential problems proactively. I would like to find a solution that would allow me to monitor network usage company-wide in real time, with detail going down to the individual computer, ideally. Given the hodgepodge of equipment and operating systems, what would be the best way to set up some kind of monitoring solution? Hardware, software, restructuring our network architecture?

  • Vagrant VM Fails to Boot

    - by Rob Wilkerson
    I have a Vagrant environment that requires me to forward port 80, so I bring it up under sudo on an OS X machine. This has always been fine until I recently upgraded to Vagrant 1.2.2. Now it fails to boot.

      [default] Waiting for VM to boot. This can take a few minutes.
      [default] Failed to connect to VM!
      Failed to connect to VM via SSH. Please verify the VM successfully booted
      by looking at the VirtualBox GUI.

    Because I'm running under sudo, the machine never gets added to the VirtualBox GUI, but that's always been the case for this environment. I don't get any indication that there was a problem with the additions -- a potential source of this error, from what I've read. I can bring things up just fine if I change to using port 8080 on the host machine. I can't use the app, but the VM itself loads up and provisions nicely. As far as I can tell, the only things that have changed are: I upgraded my Vagrant version; I updated the project's Vagrantfile to use v2 syntax. Anyone have any idea what I might be missing? I thought I'd be able to find this pretty easily, but it's quickly becoming a very real problem.
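    Since sudo runs VirtualBox as root, the VM is only visible from a root context; a minimal sketch for inspecting the boot from there instead of the GUI (the VM name comes from the first command):

      sudo VBoxManage list runningvms
      sudo VBoxManage showvminfo "<vm-name>"
      sudo vagrant up --debug     # verbose log of the boot and SSH negotiation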

  • Website hosted on my virtualbox web server not displaying images or applying css when viewed through phone

    - by WebweaverD
    I would really appreciate it if someone could help me. Please let me know if you need more info in the comments.

    My Set Up: I have a Windows 7 PC. On it I run a VirtualBox VM with an Ubuntu 12 guest OS and a LAMP setup. I share files between the two machines using Samba from Linux to Windows and using Windows file sharing (workgroup) the other way round. The VM is set up with a bridged network adapter and can happily serve web pages to my host machine. I use DHCP reservations on my home wireless router/modem to reserve an IP for the VM and give it sitename.dev in my Windows hosts file, so I can access it at sitename.dev through the browser.

    The Problem: So far so good, but I have a dev project which needs a lot of mobile template development. Now obviously I can use a browser plugin to simulate a mobile device, but I would like to be able to see the real thing easily on my phone during development. So ideally I would like a similar setup on my iPhone to my Windows setup. Now I'm not great at networking and don't have much experience with web server setup. So when I typed the IP of my VirtualBox VM into my iPhone I wasn't expecting to see anything. I was pleasantly surprised when my site loaded up. The JavaScript even seems to be running, but the images and CSS are not happening.

    My Question: 1) What is happening here - is it something to do with the bridged setup on the VM network? 2) How do I make the sites load properly through my phone?

    Notes: I've also tried another phone. The same sites viewed on live servers work fine.
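    A likely culprit: the phone has no hosts-file entry for sitename.dev, so any asset referenced by that hostname (absolute URLs in the HTML or CSS) fails, while relative links keep working. A minimal sketch for checking what the pages actually reference, run from the Windows host with the VM's real IP:

      curl -s http://<vm-ip>/ | grep -Eo '(href|src)="[^"]*"' | sort -u

    If sitename.dev shows up in that list, switching the templates to relative URLs (or serving the site by IP) should fix the phone.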

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3 GB/min. These working-set jobs start out at perhaps 40 MB/min and over the course of the backup job slowly drop so low that the BE job rate display in "current jobs" goes blank. Since we usually are only doing changed files for one day, the job is usually small and finishes overnight, and we don't worry about the slowness; but we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode, sent them that log and a VXgather from the BE host, and they had no fix/workaround. To give an idea, the working-set job in question has been running for the last 3 1/2 hours and has backed up just under 10 megabytes... I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless...

  • Macvlan based interface pings from host but not from namespace

    - by jtlebi
    My setup: a private network vboxnet1, 10.0.7.0/24; one host running Ubuntu desktop; one VM running Ubuntu server (VirtualBox). Addressing layout:

      HOST:             10.0.7.1
      VM:               10.0.7.101
      VM MAC NAMESPACE: 10.0.7.102

    On the VM, I ran the following commands:

      ip netns add mac                            # create a new namespace
      ip link add link eth0 mac0 type macvlan     # create a new macvlan interface
      ip link set mac0 netns mac

    In the mac namespace, inside the VM:

      ip link set lo up
      ip link set mac0 up
      ip addr add 10.0.7.102/24 dev mac0

    So that we basically end up with (like Inception?):

      +------------------------+
      | Host: 10.0.7.1         |
      |                        |
      | +--------------------+ |
      | | VM: 10.0.7.101     | |
      | |                    | |
      | | +----------------+ | |
      | | | NS: 10.0.7.102 | | |
      | | |                | | |
      | | +----------------+ | |
      | +--------------------+ |
      +------------------------+

    What works: ping between host and VM; ping between NS and NS; dhclient from the NS. What does not work: ping between NS and VM; ping between NS and host. Where I started to go nuts: tcpdump on the host (the real machine) actually shows ARP requests AND replies; tcpdump in the NS shows ARP requests sent to the host; tcpdump on the VM makes the whole mess work (!) -- pings start to get answers when tcpdump is started on the VM?!? So, I bet you were eager for it, my question is: how do I make it work? I suspect something's wrong with ARP on the macvlan inside the NS but I can't figure out what exactly... Btw, I did the same experiments with the mac0 interface directly on the VM (no namespace) and it worked flawlessly.
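    For what it's worth, this matches macvlan's documented behaviour: a macvlan child cannot talk to its parent interface's own address in any mode (and the default mode, vepa, additionally needs a hairpin switch for child-to-child traffic); tcpdump "fixing" things is usually attributed to it putting eth0 into promiscuous mode. A minimal sketch of one common workaround, assuming bridge mode and a spare address (10.0.7.103 is hypothetical): give the VM its own macvlan leg, so VM-to-namespace traffic never has to cross the parent's stack.

      # remove the old child, then recreate it in bridge mode
      ip netns exec mac ip link del mac0
      ip link add link eth0 mac0 type macvlan mode bridge
      ip link set mac0 netns mac
      # re-apply the address inside the namespace as before, then add
      # a second macvlan on the VM side for VM <-> namespace traffic
      ip link add link eth0 machost type macvlan mode bridge
      ip addr add 10.0.7.103/24 dev machost
      ip link set machost up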

  • Configuring nginx to check for hard files in only a few directories

    - by Evan Carroll
    For a node.js project I'm doing, I have a tree like this:

      +-- public
      │   +-- components
      │   +-- css
      │   +-- img
      +-- routes
      +-- views

    Essentially, I have the root set to public. I want all requests destined to /components/, /css/, and /img/ to check whether their appropriate destinations exist on disk. However, I don't want requests to other directories to even run an IO operation:

      /foo/asdf
      /bar
      /baz/index.html

    None of those should result in the disk being touched. I have a stanza that does the proxy to node.js:

      location @proxy {
          internal;
          proxy_set_header Host $http_host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-NginX-Proxy true;
          proxy_pass http://localhost:3030;
          proxy_redirect off;
      }

    I just would like to know how to arrange this. My problem would be easily solved if try_files took a single argument, but it always wants a file first:

      location /components/ { try_files $uri, @proxy }
      location /css/ { try_files $uri, @proxy }
      location /img/ { try_files $uri, @proxy }

    However, there is nothing that I can find that will give me:

      location / { try_files @proxy }

    How do I get the effect I want?
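    A minimal sketch of one way to arrange it: try_files takes space-separated arguments (no commas) and needs at least one file before the fallback, while a plain proxying location for everything else never touches the filesystem:

      location /components/ { try_files $uri @proxy; }
      location /css/        { try_files $uri @proxy; }
      location /img/        { try_files $uri @proxy; }

      location / {
          # same headers as the @proxy block; requests here go straight
          # to node.js with no disk check
          proxy_set_header Host $http_host;
          proxy_pass http://localhost:3030;
      }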

  • INACCESSIBLE_BOOT_DEVICE after installing Linux on same drive

    - by kdgregory
    History: my PC was configured with two drives: an 80 GB drive on IDE 0 primary that was running Win2K, and a 320 GB drive on IDE 0 secondary that was running Linux (Ubuntu). I decided to pull the 80 GB drive out of the system, so I dd'd the entire 80 GB drive (/dev/sda) onto the 320 (/dev/sdb) -- this included the MBR and partition table. Then I pulled the drive, plugged the 320 into IDE 0 primary, and rebooted. The Windows partition worked at this point. Then I installed Ubuntu into the remaining space on the 320. It works. However, when I try to boot into Windows, I get a BSOD with the following message:

      *** STOP: 0x0000007B (0x89055030,0xC000014F,0x00000000,0x00000000)
      INACCESSIBLE_BOOT_DEVICE

    Before the BSOD I see the Win2K splash screen, and it claims to be "starting windows" for a couple of seconds -- so it appears that the first-stage boot loader is working as expected. Ditto when I try booting in safe mode. After reading the Microsoft KB article, I booted into the recovery console and tried running chkdsk /r. It refused to run, claiming that the drive was corrupted (sorry, didn't write down the exact error message). However, I can mount the drive from Linux and access all files. And for what it's worth, I can scan the drive using the Linux "Disk Utility" (this is Ubuntu, the menus don't show real program names), and it claims the drive is clean. The KB article mentioned that boot.ini could be the problem, so here it is:

      timeout=10
      default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
      [operating systems]
      multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows 2000 Professional" /fastdetect

    Any pointers on what to do next?

  • As an admin, what tools do you use to log what you do to your boxes?

    - by Jerry
    I am more of a Linux applications developer than an admin. Over time, I've built servers and maintained them, sometimes to offer services, mostly just to develop the applications I work on. Way back when, I would create a file in my account to keep notes on what I did on each machine, so that I could replicate that when I migrated to other machines. Nowadays, I set up a private Trac installation, install its blog plugin, and then use that to make notes of everything I install and most commands that I run, as well as their output. This provides me with a combined wiki and blog that I find very useful as a "captain's log". I do this mostly so that when I migrate to a new clean machine, I have a much easier time bringing it up. And yet, I am always amazed when I see others just install this, delete that, run this, set up this config, ... without seeming to use any way to actually note what they are doing. What do YOU do, and what tools are available? I am especially interested in the transition between maintaining a few machines for a few people and maintaining several to dozens of machines providing a real service. What are the best practices, and where can I find good resources? Thanks!
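    One lightweight complement to a wiki: have the shell keep the log for you. A minimal sketch for bash (~/.admin-log is a hypothetical path; add the line to ~/.bashrc) that appends each command with a timestamp and working directory; script(1) can capture whole sessions, output included, in a similar spirit:

      export PROMPT_COMMAND='echo "$(date "+%F %T") $(pwd) $(history 1)" >> ~/.admin-log'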

  • Linux: Encryption of a physical LVM volume doesn't imply encryption of its logical subvolumes?

    - by java.is.for.desktop
    Hello, everyone! I installed openSUSE on my notebook one year ago. I created all partitions except /boot inside an LVM partition, and enabled encryption for it during setup. The system asked me for a password on each boot. Everything seemed fine... But one day I wanted to cancel the boot process and did it with SysRq REISUB. While entering this combination, the system suddenly continued to boot without any password being entered. I had no /home and no swap, but / was mounted! I checked multiple times; it was inside an "encrypted" physical LVM volume. Later I found out that openSUSE can't encrypt / at all: there is an option to enable encryption for each logical volume, and indeed it fails for /. Later I tried Fedora. The options during partitioning were misleading in the same way: I could enable "encryption" of a physical volume and of each logical subvolume, with the exception that Fedora actually allowed me to encrypt /. Question: what's the point of setting up "encryption" for a physical LVM volume when it doesn't imply (real) encryption of its logical subvolumes? Did I get something wrong in this whole concept?
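    A minimal sketch for checking what is actually encrypted on a given install (assuming a reasonably recent util-linux): if a volume really sits on dm-crypt, a "crypt" layer appears between the partition and the LVM volumes stacked on it.

      lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT
      cryptsetup status <mapper-name>    # <mapper-name> taken from the lsblk output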

  • iptables: How to combine DNAT and SNAT to use a secondary IP address?

    - by Que_273
    There are lots of questions on here about iptables DNAT/SNAT setups, but I haven't found one that solves my current problem. I have services bound to the IP address of eth0 (e.g. 192.168.0.20), and I also have an IP address on eth0:0 (192.168.0.40) which is shared with another server. Only one server is active, so this alias interface comes and goes depending on which server is active. In order to get traffic accepted by the service, a DNAT rule is used to change the destination IP:

      iptables -t nat -A PREROUTING -d 192.168.0.40 -p udp --dport 7100 -j DNAT --to-destination 192.168.0.20

    I also wish all outbound traffic from this service to appear to come from the shared IP, so that return responses will work in the event of an active-standby failover:

      iptables -t nat -A POSTROUTING -p udp --sport 7100 -j SNAT --to-source 192.168.0.40

    My problem is that the SNAT rule is not always run. Inbound traffic causes a connection tracking entry like this:

      [root]# conntrack -L -p udp
      udp 17 170 src=192.168.0.185 dst=192.168.0.40 sport=7100 dport=7100 src=192.168.0.20 dst=192.168.0.185 sport=7100 dport=7100 [ASSURED] mark=0 secmark=0 use=2

    which means the POSTROUTING chain is not run, and outbound traffic leaves with the real IP address as the source. I am thinking I can set up a NOTRACK rule in the raw table to prevent conntracking for this port number, but is there a better or more efficient way to make this work? Edit - alternative question: is there a way (in CentOS/Linux) to have an interface that can be bound to but not used, such that it can be attached to the network or detached when a shared IP address is swapped between servers?
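    One caution and a sketch before reaching for NOTRACK: the nat table only ever sees the first packet of a flow, and replies within a DNATed flow are reverse-translated by conntrack itself, so they should already leave with source 192.168.0.40; NOTRACK would actually disable the DNAT too, since iptables NAT depends on conntrack. A minimal way to verify what is really on the wire:

      tcpdump -ni eth0 udp port 7100      # check the source address of outbound packets
      conntrack -E -p udp --dport 7100    # watch translations as flows are created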

  • Copying files to my laptop makes them locked

    - by John
    When I save files to my local machine from e.g. remote desktop, from email (Outlook) attachments, or even from Skype, they show a locked icon on the file. Then e.g. SQL Server doesn't let me restore backups, as it says the operating system doesn't have access to the file. I've had success fixing this by setting the ownership of the parent folder to my user and then letting it apply to subfolders. Also, sometimes I need to click Properties - Security - Advanced - Change Permissions, then check "change child permissions..." and apply on the parent dir. I'm using Windows 7 64-bit Professional on an HP ProBook 4530, and I have an administrator user. This is a real pain to do every time. I suspect it might be because of HP software that came with the laptop; I think there is drive encryption as part of the ProtectTools. Although I'm hoping there's something in Windows I can set to change the behaviour to not lock these files.
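    One thing worth ruling out, since ACL resets only sometimes help: files arriving from remote sources get a Zone.Identifier alternate data stream that makes Windows treat them as "blocked". A minimal sketch for clearing it over a folder (C:\path\to\folder is a hypothetical path; Unblock-File needs PowerShell 3.0+, and Sysinternals' streams.exe does the same job on stock Windows 7):

      powershell -Command "Get-ChildItem 'C:\path\to\folder' -Recurse | Unblock-File"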

  • DFS Root namespace is RDWR for all users

    - by Patrick
    We have an existing DFS replication and namespace group that we use to serve the company's files. This has been operating fine for us for some time now, and continues to do so; however, a situation arose yesterday afternoon that has left us stumped. The problem is that we have our namespace presented as:

      \\domain.co.uk\public\[8 or 9 folders that are mapped to the users in the business]

    We had a problem this morning that meant that a number of users started mapping their AD home drive directly to the \\domain.co.uk\public directory, and we found that they had read/write. This rapidly became a problem, as at least one director saved some moderately sensitive documents in there and basically anyone could read them. I've tidied up that specific problem with some deft scripting and a slight modification of group policy. However, I would like to make \public read-only; the trouble is I can't work out where the ACLs for that folder are held. All the folders presented as \\domain.co.uk\public\[folder] are 'real' folders on logical volumes on our DFS servers, so they are secured with groups applied via the 'security' tab. I'd like to do the same on \public but I can't find it. I have looked through, amongst other things, \Sysvol\domain.co.uk but can't find it, and after a lot of clicking and a bit of reading I can't see how to lock it down. Any thoughts?
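    A sketch of where to look, assuming a standard domain-based namespace: the root of \\domain.co.uk\public is backed by a real shared folder on each namespace server, conventionally under C:\DFSRoots, and it is that folder's share and NTFS permissions that govern \public itself.

      rem on each namespace server (path is the conventional default)
      icacls C:\DFSRoots\public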

  • Log connections to program

    - by Zac
    Besides using iptables to log incoming connections, is there a way to log established inbound connections to a service that you don't have the source to (supposing the service doesn't log stuff like this on its own)? What I want to do is gather some information based on who's connecting, to be able to tell things like what times of the day the service is used the most, where in the world the main user base is, etc. I am aware I can use netstat and just hook it up to a cron script, but that might not be accurate, since the script could only run as frequently as once a minute. Here is what I am thinking right now: write a program that constantly polls netstat, looking for established connections that didn't appear in the previous poll. This idea seems like such a waste of CPU time though, since there may not be a new connection. Or: write a wrapper program that accepts inbound connections on whatever port the service runs on, but then I wouldn't know how to pass that connection along to the real service. Edit: it just occurred to me that this question might be better for Stack Overflow, though I am not certain. Sorry if this is the wrong place.
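    A third option that avoids polling entirely: subscribe to the kernel's connection-tracking event stream and log new flows to the service's port as they happen. A minimal sketch (9999 is a hypothetical port; requires the conntrack tool and the nf_conntrack modules):

      conntrack -E -e NEW -p tcp --dport 9999 >> /var/log/service-connections.log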

  • VOIP and internet connection speeds [cable vs. fiber]

    - by microchasm
    Our office is migrating to IP telephony. We have fewer than 10 employees who will be using the phones. We currently have cable internet, and they just bumped the speeds. There is a data center that was recently built in our building, and we were considering colocating there in the near future. As a result, they offered us access to their triple-redundant internet, but it's quite expensive: 3 Mbps committed with up to 10 Mbps burst for $250/month (discounted). We pay ~$120 for our cable (which we planned to keep - at least for TV). I want the phone system and LAN to be as separate as possible. I was thinking about keeping the cable for the LAN and using the other connection for the phones (until I saw the price). Now I'm thinking it might make sense to add on to our existing cable setup and keep DSL only as a backup for the cable. Is there any real benefit to the fiber, especially at that price? Any other suggestions or ideas? Thanks.

  • Application that will identify percentage of your system disk bandwidth used on a user-application by user-application basis?

    - by Warren P
    I always (subjectively) feel my computer is far too slow (however fast it is), and so I'm always looking for ways to measure and understand what my computer is actually doing that makes it seem "slow" to me. It has been my observation that my software-developer workload is most often disk-bound (waiting for disk I/O) rather than CPU-bound. What has made it worse is that I am using a corporate PC that has in-memory active-scanning anti-virus software that I do not have control over, and also some IT-department-mandated services that seem to suck up a lot of the available hard-disk bandwidth. The best tool I have seen (in Windows 7) is the Resource Monitor, which I usually access from the button in the Task Manager. The disk I/O page, however, seems to label disk activity at a very low level (for example, showing the Volume Shadow Storage, which is flushing information obviously written by something ELSE other than VSS itself, and the writes to pagefile.sys, which are obviously due to virtual-memory faults in some application). What I would like to know is whether a utility exists that can add up all direct disk input and output on a user-application by user-application basis, or find the process or service that caused VM or VSS activity. In that way, I hope, you could establish a real idea of how much of your computer's precious disk-subsystem bandwidth is attributable to a particular application. Here's a scenario: MyApp.exe writes 100 kB/s and reads 100 kB/s directly. VSS ends up writing another 100 kB/s. Page faults caused inside MyApp.exe cause another 100 kB/s of writes. So the total "cost" of MyApp.exe running during a period of time (let's say 1 second) is 400 kB/s, whereas you can only directly observe half of that in Resource Monitor. Is there a smarter disk-I/O-watching piece of software I can use?
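    For the directly attributable part, Windows does expose per-process I/O counters that can be sampled and summed; indirect VSS and pagefile traffic still shows up under System, so this is only a partial answer. A minimal sketch using the built-in typeperf (1-second samples, 10 of them):

      typeperf "\Process(*)\IO Read Bytes/sec" "\Process(*)\IO Write Bytes/sec" -si 1 -sc 10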
