Search Results

Search found 11930 results on 478 pages for 'shared machines'.


  • PHP: Extensionless URLs in IIS7 (Windows)? (for WordPress)

    - by smithym
    Hi there, I have recently installed WordPress but I would like to configure extensionless URLs. I am using IIS7, but on a shared server. I presume I can add something to the web.config file? I am a little bit confused: in IIS7 and ASP.NET MVC this is done via code, but in PHP I don't think it is, so the only alternative is a rewrite module, and I can't install ISAPI stuff since I am on a shared server. So I was wondering if there is a way to do the mapping, i.e. when going to /testme it would actually load testme.php. Any advice really appreciated. Thanks
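    A minimal sketch of the kind of web.config rule that can handle this, assuming the shared host has the IIS7 URL Rewrite module enabled (it is a native module rather than ISAPI, so some shared hosts do provide it); the rule name is just a placeholder:

      <?xml version="1.0" encoding="UTF-8"?>
      <configuration>
        <system.webServer>
          <rewrite>
            <rules>
              <!-- Rewrite /testme to /testme.php when such a file exists -->
              <rule name="ExtensionlessPHP" stopProcessing="true">
                <match url="^(.+)$" />
                <conditions>
                  <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                  <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                  <add input="{REQUEST_FILENAME}.php" matchType="IsFile" />
                </conditions>
                <action type="Rewrite" url="{R:1}.php" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>
      </configuration>

    For WordPress pretty permalinks the usual approach is a similar rule that rewrites any non-file, non-directory request to index.php instead.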

    Read the article

  • VMware Headphone Issue

    - by Ash_Geek_Pickney
    I am running VMware Workstation version 7 on a Windows 7 partition on a MacBook Pro. I have built a virtual XP machine within VMware to run some old music programs. The sound from my virtual machines comes out of my MacBook speakers, but the volume is not controllable using my MacBook volume controls, only by using the volume control on the XP machine. My problem is that I need to use headphones, and when I plug a headphone jack into my MacBook, the sound from my VMware virtual machines still comes out of the MacBook speakers. I am baffled. Please help. Thanks in advance...

    Read the article

  • In Ruby, how do I read memory values from an external process?

    - by grg-n-sox
    All I want to do is make a Ruby program that reads some values from known memory addresses in another process's virtual memory. Through my research and some basic knowledge of hex editing a running process's x86 assembly in memory, I have found the base address and offsets for the values I want. I do not want to change them; I just want to read them.

    I asked a developer of a memory editor how to approach this, abstracted from language and assuming a Windows platform. He told me the Win32 API calls OpenProcess, CreateProcess, ReadProcessMemory, and WriteProcessMemory were the way to go, using either C or C++. I think the way to go would be to use Ruby's Win32API class and map two instances of it: one for either OpenProcess or CreateProcess, depending on whether the user already has the process running, and another mapped to ReadProcessMemory. I probably still need to find the function for getting the list of running processes so I know which running process is the one I want if it is already running. This would take some work to put together, but I figure it wouldn't be too bad to code up. It is just a new area of programming for me, since I have never worked this low-level from a high-level language (well, higher-level than C anyway).

    I am just wondering about the ways to approach this. I could use a bunch of Win32API calls, but that means dealing with a lot of string and array packing and unpacking that is system dependent. I want to eventually make this work cross-platform, since the process I am reading from is produced from an executable that has builds for multiple platforms. (I know the memory addresses change from system to system; the idea is to have a flat file that contains all the memory mappings so the Ruby program can just match the current platform environment to the matching memory mapping.) But from the looks of things I'll just have to make a class that wraps whichever memory-related function calls the current platform's system shared library provides. For all I know, there could already be a Ruby gem that takes care of all of this for me that I am just not finding.

    I could also try editing the executables for each build so that whenever the memory values I want to read are written to by the process, it also writes a copy of the new value to a space in shared memory, which I would somehow have Ruby wrap in a class that is, under the hood, a pointer to that shared memory address, and somehow signal to the Ruby program that the value was updated and should be reloaded. Basically, an interrupt-based system would be nice, but since the purpose of reading these values is just to send them to a scoreboard broadcast from a central server, I could stick to a polling-based system that sends updates at fixed time intervals. I could also abandon Ruby altogether and go for C or C++, but I do not know those nearly as well. I actually know more x86 than C++, I only know C as far as system-independent ANSI C, and I have never dealt with shared system libraries before.

    So is there a gem or lesser-known module available that has already done this? If not, any additional information on how to accomplish this would be nice. I guess, long story short: how do I do all this? Thanks in advance, Grg

    PS: A confirmation that those Win32API calls should be aimed at the kernel32.dll library would also be nice.
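    To the kernel32.dll question: yes, OpenProcess and ReadProcessMemory both live in kernel32. Below is a minimal, Windows-only sketch of the Win32API approach described above, using the Ruby 1.8/1.9-era Win32API class and assuming a 32-bit Ruby reading a 32-bit target; the PID and address are hypothetical placeholders:

      require 'Win32API'

      PROCESS_VM_READ           = 0x0010
      PROCESS_QUERY_INFORMATION = 0x0400

      # Map the kernel32 functions once; 'L' = long, 'I' = int, 'P' = pointer
      open_process = Win32API.new('kernel32', 'OpenProcess',       ['L', 'I', 'L'], 'L')
      read_memory  = Win32API.new('kernel32', 'ReadProcessMemory', ['L', 'L', 'P', 'L', 'P'], 'I')
      close_handle = Win32API.new('kernel32', 'CloseHandle',       ['L'], 'I')

      pid     = 1234        # hypothetical: PID of the target process
      address = 0x00400000  # hypothetical: address found by hex editing

      handle = open_process.call(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION, 0, pid)
      raise 'OpenProcess failed' if handle == 0

      buffer     = "\0" * 4   # room for one 32-bit value
      bytes_read = "\0" * 4
      ok = read_memory.call(handle, address, buffer, buffer.size, bytes_read)
      raise 'ReadProcessMemory failed' if ok == 0

      value = buffer.unpack('L').first   # unpack as an unsigned 32-bit integer
      puts format('Value at 0x%08x: %d', address, value)

      close_handle.call(handle)

    Finding the right PID can be done through the Toolhelp/EnumProcesses APIs, or more simply by shelling out to tasklist and matching on the executable name.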

    Read the article

  • Monitoring CPU Usage Over the Course of a Day?

    - by bobber205
    Are there any apps I can install on client machines that will generate a report of which app used what % of the CPU? We have some machines that are running, at times, VERY slowly. The machine will run really poorly, then boom, back up to full speed. There isn't usually enough time to check Task Manager quickly enough to see what is running, not to mention that the majority of the time the people using the computer don't know what Task Manager is. ;) It would be really nice to look at some logs and see if, maybe, the antivirus is randomly taking a lot of CPU for stretches of time. Or another application. Thanks! EDIT: This is for Windows XP. Sorry for the oversight. :)

    Read the article

  • VMXNET3 nic loses the ability to update ARP table after N hours

    - by Peter
    I have a Fedora 18 VM that stops updating the ARP table on eth1 after running for a number of hours to days. Other VMs on the same hypervisor can access all of the same networks without issue. A tcpdump of the offending NIC shows only ARP broadcasts but no responses, and none of the other VMs on the vDS see the ARP broadcasts from the offending NIC. The only way I can currently solve the problem is to reboot the VM, and then everything works for a while. I've tried changing the port on the vDS and even flipping the network configurations after I lose eth1's ARP table; the ARP problem follows eth1, while I can still access the machines that were originally on eth1. If I statically add the ARP entries for machines on the same subnet, I have no problems with connectivity. The hypervisor is an HP BL49X series with Flex-10 network modules. Has anyone seen anything like this before?

    Read the article

  • Help me set up a routing server on Windows XP

    - by Lu Lu
    I am developing client/server applications and need to test them in an Internet-like environment. However, I only have one PC, and it is not connected to the public Internet, so my plan is to simulate a virtual Internet environment on this PC. I intend to install virtual machines for this purpose, but I can only run one virtual machine because my PC is slow. At the moment, my PC has two machines: the host machine (Windows XP) and the virtual machine (Windows Server 2003 Enterprise). Each machine has two network adapters (host-only and internal); the internal adapter is private to each machine, and the host-only adapters are connected together. With Windows Server 2003, a routing server is fine, but is there any routing server product for Windows XP? Please help me. Thanks.

    Read the article

  • Setting up a proxy for FTP - Windows

    - by RadiantHex
    Hi folks, I have two Windows machines and a server. One is my laptop; the other is a workstation whose IP is whitelisted on the server. The laptop has a dynamic IP, so its IP cannot be whitelisted. I would like to be able to perform FTP transfers from my laptop to the server while using the workstation as a proxy. Both machines are running Windows 7. Is this possible? Help would be amazing!

    Read the article

  • Sending spam-free mail through my website

    - by Sara
    Hi, I've been battling with this issue for a couple of months. I need to send bulk mail (not spam) through my social network to users for things like newsletters and site invitations (when a user imports their address book contacts). I'm using shared hosting, which limits me to 500 mails per hour. Even when I manage to send the mails, most of them end up in users' spam boxes. After researching, these are the solutions I finally came up with: 1) Use the Google Apps SMTP service (http://www.google.com/apps/intl/en/business/features.html) 2) Move to a VPS 3) Use shared hosting with throttling enabled. Please advise me on what to choose. Will using Google Apps prevent mail from being marked as spam? I can't use a third-party SMTP service like iContact or Aweber, as the invitation-sending script will send emails to thousands of contacts, depending on the user's address book. Thanks in advance

    Read the article

  • AIX network parameters to close TCP sockets of unplugged devices

    - by ADD Geek
    Hi there, we have an AIX box running what we call in banking an "ATM switch" (not the ATM networking switch, but the bank ATM driver), where we have some ATM machines connected to two server processes. The problem is that when we disconnect any of these machines, the netstat -na | grep <port number> command shows that the socket established for the disconnected device is still established; we have to manually send a command from the software to make the socket aware that it is not live anymore. Is there a parameter at the TCP level to make the connection aware, within a minute or two, that the device is not connected anymore? We set the following parameters with root privileges: no -o tcp_keepidle=1000, no -o tcp_keepcnt=2, no -o tcp_keepintvl=150, no -o tcp_finwait2=100. They originally had the default values, but even after we changed these parameters and restarted the server processes, the problem was still there.

    Read the article

  • Is there any way to isolate the python2.7 / mod_wsgi installation from the main environment?

    - by user31
    I have many local virtual machines for building Django websites. I find it very hard to configure all the machines with mod_wsgi, Python, and all the related installation issues. Is there any way I can install Python 2.7, mod_wsgi, etc. inside the virtual environment folder, so that I can just copy that folder to my live server and don't need to mess with mod_wsgi, Python 2.7, and the other issues? Is that possible, or is there any close variation of it, so that putting the site onto live servers is very easy and everything the site needs is included locally? I also face many problems when I need to move Django sites across servers.
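    A rough sketch of the usual compromise, assuming virtualenv and pip are available (all paths below are hypothetical): the virtualenv can hold the site's Python 2.7 packages, but mod_wsgi itself is a compiled Apache module linked against one specific Python, so it still has to be installed on each server and then pointed at the virtualenv:

      # on each machine (build VM or live server)
      virtualenv --python=python2.7 --no-site-packages /srv/mysite/venv
      /srv/mysite/venv/bin/pip install -r /srv/mysite/requirements.txt

      # Apache configuration using the classic (pre-4.x) mod_wsgi options
      WSGIDaemonProcess mysite python-path=/srv/mysite:/srv/mysite/venv/lib/python2.7/site-packages
      WSGIProcessGroup  mysite
      WSGIScriptAlias   /     /srv/mysite/mysite/wsgi.py

    Copying the venv folder verbatim between machines only works reliably when the OS, paths, and base Python match; otherwise regenerating it from a requirements.txt on the target server is the more dependable route.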

    Read the article

  • VirtualBox PXE Boot Failing with a Windows Server 2008 R2 Server

    - by Vbitz
    Some fast help on this would be good; I have been on this problem for 14 hours. In a VirtualBox test environment I have two virtual machines networked together using an internal network (no traffic runs through the host, it is all at a software level). One is a fresh client with 512 MB of RAM and a dual-core setup; the other is the server with 1.5 GB of RAM, running Server 2008 R2. The server is configured as a DNS server, DHCP server, and domain controller, and also serves PXE booting through WDS (Windows Deployment Services). Both machines can see each other and I am able to start a network boot. The issue comes at the second-to-last stage of the pre-Windows-PE install: the TFTP download of boot.sdi starts but then stops during the boot process.

    Read the article

  • How do I figure out what is changing the userWorkstations attribute in Active Directory?

    - by Martin
    I just took over the IT for a medium-sized business with three domain controllers (2003/2008 Standard), and whenever I create a new user, after some time the user account cannot log into most machines on the network. I have traced this back to the "Log On To..." area becoming populated with a small list of machines. Even when I set the option to all computers, this list comes back after some time. I started hunting for .vbs and .ps1 scripts with the word "workstations" in them on all domain controllers, to see if there is some kind of script to blame, but I have thus far come up empty-handed. Is there a known software suite that can cause this (Microsoft Forefront, etc.)? How can I figure out what is causing this list to change?

    Read the article

  • How to prevent Mac OS X from creating .DS_Store files on non-Mac (non-HFS) volumes?

    - by sudo petruza
    Is there a way to prevent Mac OS X from creating .DS_Store and other hidden meta-files on foreign volumes like NTFS and FAT? I share an NTFS partition, holding data like the Thunderbird & Firefox profiles and Apache's DocumentRoot, between Mac OS X and Windows, which is very handy. I don't mind if Mac OS X is not capable of indexing or otherwise doing the neat things those metafiles are for. Note: it's not shared over a network; both operating systems and the shared partition coexist on the same disk, on the same machine.

    Read the article

  • Good option for a transparent internet gateway on Mac OS X

    - by Gareth
    Hi, I have a small network of Mac systems and would like to add some internal monitoring of our Internet usage, which has recently begun to climb. I would like to configure one of the machines as an Internet gateway and install some monitoring software that could provide graphs of network usage by machine. The machine would then double as a workstation and as the Internet gateway. I can manually configure the machines on the network to use it as a gateway, and would prefer to avoid an explicit HTTP proxy (although it is an option if necessary). What software would Server Fault users recommend to provide simple, easily configurable and maintainable network monitoring on Mac OS 10.5.7 (non-server)? The simplest requirement is to monitor usage by IP address, but additional tracking (e.g. destination, protocol, etc.) would be useful.

    Read the article

  • How best to back up 6x Win2k3 servers

    - by saille
    We have an external HP LTO3 tape drive. It needs to back up six Windows 2003 machines every night. The servers are HP DL380 G3s, and the tape drive is attached locally to one of them via SCSI. On a budget of $0, and with a goal of keeping it simple, what is going to be the best way to back up these machines? What software should we use: NTBackup, or does HP have something better for free? We don't need image backups; file system + system state will be adequate. Do we need to copy the files to be backed up onto the machine with the tape drive attached? Edit: Let me ask a more focused question: would you use NTBackup or something else? No soapboxing please; we're after some quick advice from someone who's used a similar setup.

    Read the article

  • Disable Acer eRecovery system

    - by Joel Coehoorn
    The meat of this question is that I'm looking for a way to either require a password before using a recovery partition, or "break" the recovery partition (specifically, Acer eRecovery) in a way that I can later "unbreak" only by booting normally into Windows first. Here are the full details:

    I have a set of new Acer Veriton n260g machines in a computer lab. A lot of work went into setting up this lab to work well; for example, Office 2007 and other programs needed by the students were installed, all Windows updates are applied, and a default desktop is set up. All in all, it's several hours of work to fully set up one machine. Unfortunately, I don't currently have the ability to easily image these machines, and even if I did I would want to avoid downtime while an image is restored. Therefore, I've taken steps to lock them down, namely DeepFreeze and a BIOS password to prevent booting from anywhere but the frozen hard drive. DeepFreeze is an amazing product: as long as you boot from the frozen hard drive, there is no way to make permanent changes to that hard drive. Anything you do is wiped after the machine restarts. It lets me give students the leeway to do what they want on lab computers without worrying about them breaking something.

    The problem is that even with the BIOS locked and set to only boot from the hard drive, these Acers still have a simple way to choose a different boot source: shut them down and put a paper clip in a little hole at the top while you turn them on again. This puts them into "Acer eRecovery" mode. This by itself is no big deal; you can still power-cycle with no impact. But if you then click through the menu to reset the machine (we're now past the point of curiosity and on to intent), it will wipe the hard drive and restore it to the original state. Of course, a few students have already figured this out and reset a couple of machines. That's unfortunate, but inevitable.

    I don't want to destroy the ability to do this entirely (which I could, by repartitioning the drives to remove the recovery partition), but I would like a way to require a password first, or "break" the recovery system in a way that I can "unbreak" only if I first un-freeze the hard drive in DeepFreeze. Any ideas?

    Read the article

  • Best way to duplicate databases nightly?

    - by Margaret
    Hey all, we just got two new servers that are running Windows Server 2008. The intent is to make the machines pretty much identical, copying the content of the master to the slave on a nightly basis, so that if anything fails, the second copy can stand in immediately. It doesn't need to be up-to-the-minute mirroring, though I suppose that wouldn't hurt if performance is not affected. The two machines will, amongst other things, each be running an instance of SQL Server 2008. The aim is to duplicate the databases on the master down to the slave on a nightly basis. Unless I'm misunderstanding, the mirror database in a mirrored pair requires the principal to be present to work correctly; I'm hoping for a solution where we have a second machine that can be up and running with minimal downtime if the first one falls over. Am I misunderstanding mirroring? Is that the best way to do things, or should I use some other mechanism? If so, what?

    Read the article

  • Intermittent Disconnection of Client Computers from Domain Server

    - by dilip nagle
    The background: I have Windows Server 2008 Enterprise with 25 user CALs. It hosts a domain with all the users and a network-shared HP printer. The server has two network cards; both cards, as well as all client machines, use the 192.168.1.* addressing scheme with subnet mask 255.255.255.0. Of the two network cards (192.168.1.231 and 192.168.1.233), only 192.168.1.231 is registered with DNS; 192.168.1.233 (the second card) has 192.168.1.231 as its default gateway and DNS address. The server has three hard disks (500 GB, 500 GB and 1 TB), partitioned as (C, D, E), (F, G) and (K), with partition K holding all user data in various shared folders. Each of these folders on partition K is mapped on each user's computer according to the access rights given to them.

    The problem: the server was installed about 6 months ago and to date has not hung or given any problem even once. All client computers are able to run the web-based software via IP address, e.g. http://192.168.1.231/webERP/default.aspx. However, occasionally, when a client computer tries to browse the network mappings, it hangs. Again, there is no fixed pattern; this may happen after running smoothly for, say, 3 days.

    On each client machine the network settings are as follows: IP address 192.168.1.* (where * is 1, 2, 3, ...); subnet mask 255.255.255.0; default gateway 192.168.1.231 (which is the server card and DNS address); preferred DNS server 192.168.1.231. In the Advanced tab under WINS, LMHOSTS lookup is unticked and the default option is selected. Ideally I would have loved to disable NetBIOS over TCP/IP, but some network printers cannot be accessed if that option is selected, even though disabling NetBIOS would drastically reduce the NetBIOS broadcast traffic sent to all computers on the network for name resolution.

    On the server I have WINS running; I have scavenged records, verified database integrity, removed tombstoned records, etc. The only critical errors, shown once a day when the server is started, are 4224 (WINS) and 12923 (Server Licensing failed to update DNS record). I fail to understand why client machines hang when they try to browse the mapped network shared folders on the K drive. Kindly advise.

    Read the article

  • VPN/AFP server for centralized TimeMachine backups

    - by Keith Johnson
    I am a sysadmin for a small group of about 7 people who prefer Apple machines for their work. These machines are currently either a) not backed up at all, or b) backed up using Retrospect (which I'm not very fond of). I don't really have the budget for anything fancy, and I'd like to keep it as user-friendly as possible. Ideally I am thinking of a VPN server they can connect to (to keep the traffic secure, and because they work from home frequently), along with an AFP server for use with Time Machine. The goal would be to get better backup coverage, along with user-initiated restores and overall ease of use. Does this seem like a reasonable idea? Has anyone done this before? Are there any obvious problems I've overlooked?

    Read the article

  • Snow Leopard: Optimization

    - by Shyam
    Hi, I have a bunch of questions: I have a Mac network with five Macs. Right now, they are individually getting software updates. Is there a way to download the patches/security updates to a single place (repository) and point all machines to this location? Personally, I have tools like Monolingual and Onyx, but are there tools you could recommend that positively affect the performance of the operating system? Tweaks would be nice; links and pointers would be really appreciated. I've read about Time Machine: is there a way to back up all machines to a network drive using this tool? Thanks!

    Read the article

  • Replacing a W2K3 Domain Controller - what do I need to know?

    - by Marko Carter
    I have a network of around 70 machines, currently with two DCs, both running Windows Server 2003 (DC0 & DC1). DC0 is a five-year-old PowerEdge 1850 that has recently become increasingly flaky, and in the past fortnight it has fallen over twice. I want to replace this machine, but I'm cautious, as there is huge scope for this sort of thing to go wrong. The way I imagine doing this is building a new machine, then doing a DCPROMO and running three domain controllers for a month or so until I'm happy that everything is working as it should be, before retiring the old machine. Particular areas of concern are the replication of roles from the current controllers (GP settings, for instance) and the ramifications of switching off the machine that has, up until now, been the 'primary'. If there are compelling reasons to use Server 2008 I'm willing to do so; however, I don't know if this would cause problems with my existing 2003 machines. Any advice on best practice or previous experiences would be most welcome.

    Read the article

  • Strange behavior in networking between 64bit and 32bit

    - by Rob
    I'm seeing strange behavior in my network setup. I have two laptops, one (Lenovo) with Windows 7 Professional 64-bit and another (Acer) with Windows 7 Ultimate 32-bit, plus a wireless router. I'm connecting the two laptops through the router, but with a strange result: I can ping both machines, as well as the router, but when I try to access their shared folders (\\computer_name\shared_folder) the connection starts to fail, and I need to reboot both machines to get it working again. This only happens sometimes; other times it works.

    Read the article

  • Overcrowded Windows XP Folders

    - by BlairHippo
    I know that, technically, an individual Windows XP directory can hold an immense number of files (over 4.29 billion, according to a quick Google search). However, is there a practical ceiling where too many files in one directory start having an impact on reads of those files? If so, what factors would exacerbate or help the issue? I ask because my employer has several hundred XP machines in the field at client sites, and the performance on some of the older ones is getting "sludgy." The machines download and display client-defined images, and my supervisor and I suspect that our slacktastic approach to cache management could be to blame. (Some of the directories have tens of thousands of images in them.) I'm trying to gather evidence to support or contest the theory before spending time on a coding fix.

    Read the article

  • mount: mount to NFS server 'IPADDRESS' failed: RPC Error: Program not registered

    - by matt74tm
    I've got two RedHat 5 / CentOS systems which share a folder. I'm trying to change the shared folder location, but I ran into this error on the machine on which the folder is mounted. How can I correct this? I rebooted the computer, but to no avail.

    Server1 (where it's mounted), /etc/fstab:

      IPADDRESS2:/opt/programA/common/files /srv/server2-share nfs rw,intr 0 0

    Server2 (where it's shared), /etc/exports:

      /opt/programA/common/files IPADDRESS1/28(rw,insecure,sync,no_root_squash)

    I ran the following on Server2:

      root@server2 [~]# /etc/init.d/nfs start
      root@server2 [~]# rpcinfo -p
         program vers proto   port
          100000    2   tcp    111  portmapper
          100000    2   udp    111  portmapper
          100011    1   udp    875  rquotad
          100011    2   udp    875  rquotad
          100011    1   tcp    875  rquotad
          100011    2   tcp    875  rquotad
          100005    1   udp    892  mountd
          100005    1   tcp    892  mountd
          100005    2   udp    892  mountd
          100005    2   tcp    892  mountd
          100005    3   udp    892  mountd
          100005    3   tcp    892  mountd
      root@server2 [~]# /etc/init.d/nfs status
      rpc.mountd (pid 10204) is running...
      nfsd (pid 10201 10200 10199 10198 10197 10196 10195 10194) is running...
      rpc.rquotad (pid 10189) is running...
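    One thing worth noting about the output above: the rpcinfo listing shows portmapper, rquotad and mountd, but no nfs program (100003), which is exactly what the client-side "Program not registered" error complains about. A rough sequence to check and re-register it on Server2 (service names as on a stock RHEL5/CentOS 5 box):

      # make sure the portmapper is up before NFS registers with it
      service portmap status
      service portmap start          # only if it was not running

      # restart NFS so nfsd registers program 100003, then re-export
      service nfs restart
      exportfs -ra

      # confirm 'nfs' now appears (normally on port 2049)
      rpcinfo -p | grep -w nfs

    Then retry the mount on Server1, e.g. mount -a (or mount /srv/server2-share).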

    Read the article
