Search Results

Search found 1972 results on 79 pages for 'trick'.


  • Why isn't 'Low Fragmentation Heap' LFH enabled by default on Windows Server 2003?

    - by James Wiseman
    I've been investigating an issue with a production Classic ASP website running on IIS6 which seems indicative of memory fragmentation. One of the suggestions for how to ameliorate this came from Stack Overflow: How can I find why some classic asp pages randomly take a real long time to execute? It suggested flipping a setting in the site's global.asa file to 'turn on' Low Fragmentation Heap (LFH). The following code (with a registered version of the accompanying DLL) did the trick. Set LFHObj=CreateObject("TURNONLFH.ObjTurnOnLFH") LFHObj.TurnOnLFH() application("TurnOnLFHResult")=CStr(LFHObj.TurnOnLFHResult) (Really the code isn't that important to the question). An author of a linked post reported a seemingly magic resolution to this issue, and, reading around a little more, I discovered that this setting is enabled by default on Windows Server 2008. So, naturally, this left me a little concerned: why is this setting not enabled by default on 2003, or, if it works in 2008, why have Microsoft not issued a patch to enable it by default on 2003? I suspect the answer to the above is the same for both (if there is one). Obviously, we're testing it in a non-production environment, and doing an array of metrics and comparisons to determine whether it actually helps us. But aside from this I'm really just trying to understand if there's any technical reason why we should do this, or if there are any gotchas that we need to be aware of.
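
    For context, a hedged sketch of what a "turn on LFH" helper presumably does under the hood: calling kernel32's HeapSetInformation with the HeapCompatibilityInformation class (0) and a value of 2 for the process heap. The TURNONLFH COM object's internals aren't shown in the question, so this Python/ctypes version is illustrative only:

      import ctypes

      kernel32 = ctypes.windll.kernel32

      HEAP_COMPATIBILITY_INFORMATION = 0   # HeapCompatibilityInformation class
      lfh = ctypes.c_ulong(2)              # 2 = switch the heap to the Low Fragmentation Heap

      heap = kernel32.GetProcessHeap()
      ok = kernel32.HeapSetInformation(heap, HEAP_COMPATIBILITY_INFORMATION,
                                       ctypes.byref(lfh), ctypes.sizeof(lfh))
      print("LFH enabled" if ok else "HeapSetInformation failed")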

    Read the article

  • What's the piece of hardware listening on Facebook's or Wikipedia's IP address?

    - by Igor Ostrovsky
    I am trying to understand how massive sites like Facebook or Wikipedia work, out of intellectual curiosity. I read about various techniques for building scalable sites, but I am still puzzled about one particular detail. The part that confuses me is that ultimately, the DNS will map the entire domain to a single IP address, or a handful of IP addresses in the case of round-robin DNS. For example, wikipedia.org has only one type-A DNS record. So, people from all over the world visiting Wikipedia have to send a request to the one IP address specified in DNS. What is the piece of hardware that listens on the IP address for a massive site, and how can it possibly handle all the load coming from the requests from users all over the world? Edit 1: Thanks for all the responses! Anycast seems like a feasible answer... Does anyone know of a way to check whether a particular IP address is anycast-routed, so that I could verify that this really is the trick used in practice by large sites? Edit 2: After more reading on the topic, it appears that anycast is not typically used for dynamic web content. Anycast is usually used for UDP (e.g., DNS lookups), or sometimes for static content. One interesting thing to note is that Facebook uses profile.ak.fbcdn.net to host static content like style sheets and JavaScript libraries. Each time I ping this name, I get a response from a different IP address. However, I can't tell whether this is anycast in action, or a completely different technique. Back to my original question: as far as I can tell, even a large site will have a single expensive piece of load-balancing hardware listening on its handful of public IP addresses.
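
    On the Edit 2 observation that profile.ak.fbcdn.net answers from a different address each time: repeatedly resolving the name makes the distinction visible. Many distinct addresses from a single vantage point suggests DNS-based distribution (a CDN); a genuinely anycast address stays constant, and the routing system, not DNS, picks the nearest site. A rough sketch (resolver caching can mask the rotation within a record's TTL):

      import socket
      import time

      NAME = "profile.ak.fbcdn.net"   # the CDN hostname mentioned in the question

      seen = set()
      for _ in range(10):
          _, _, addrs = socket.gethostbyname_ex(NAME)
          seen.update(addrs)          # collect every address handed back
          time.sleep(1)

      print(sorted(seen))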

    Read the article

  • Connecting to my home router web interface from work

    - by Joe
    Hi, I'm trying to connect to my home router web interface from work. I use dyndns, because I don't have a static IP at home, and it works perfectly from any other place except my workplace (update: I made a mistake, see edit below). When trying to access the web interface from work I get a "500 Server Error" with the code: SERVER_RESPONSE_RESET. I'm not trying to use any protocols such as remote desktop, I'm only trying to access the web interface. I can access any other web page from my workplace with no problems, and I think my router web interface is like any other web page, isn't it? I thought maybe my workplace proxy blocks addresses of services like dyndns, so I also applied another trick. Since I have a web page on my own domain (say www.mydomain.com) which I can access from work, I tried adding a CNAME to my domain which is linked to the dyndns address (router.mydomain.com). This way if anyone enters the address router.mydomain.com from anywhere, they reach my home router web interface, and there's no way of knowing it's a dyndns address (or is there?). However, it still doesn't work from my workplace (I get the same error message). Any ideas? Edit: I'm sorry to say I made a mistake earlier. I used to be able to access my home router web interface from my old workplace, and I thought it was still possible since I don't recall making any configuration changes. However, after reading the replies, I went over to my old workplace and checked, and it doesn't work from there either. I'm very sorry for giving out wrong and misleading information about my problem. So to summarize: my problem is that I can't access my home router web interface from anywhere.
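
    On the aside about whether anyone could tell router.mydomain.com is really a dyndns name: the CNAME target is visible to any resolver, so the indirection hides nothing, and a proxy that filters on dyndns domains could still see it. A quick sketch using the hostname from the question:

      import socket

      # The canonical name returned after following the CNAME is the dyndns target.
      canonical, aliases, addrs = socket.gethostbyname_ex("router.mydomain.com")
      print("canonical name:", canonical)
      print("aliases:", aliases)
      print("addresses:", addrs)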

    Read the article

  • Running KVM/XEN/Hyper-V VMs from a RAM disk, is this possible? Practical?

    - by Ausmith1
    Currently I'm using ESX (v3 and v4) to test a scripted OS (Windows 2003) and application install DVD. The DVD ISO (8GB) is mounted on a 1Gbps NFS datastore and the VMDKs (20GB) are on an SSD mounted via NFS over a 10Gbps link. It still takes a lot longer than I'd really like to run through a test iteration, and I'm wondering if mounting the virtual disks and ISO on a RAM disk on the same server the hypervisor is running on would be worth my while. I can dedicate a server to this VM, and 32GB of RAM in the system should be adequate to do the trick, I'd guess. (1GB hypervisor OS, 28GB RAM disk and 2GB for the VM is < the 32GB available to me.) Since hosting a RAM disk within ESX does not seem possible, I'm open to trying KVM/Xen/Hyper-V. KVM would probably be my first choice of these three. Anyone out there tried this? Bear in mind this is purely for a test run of the installer; the VM will be discarded as soon as the test is completed, so I'm not worried about losing data from the remote possibility of a power failure.
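
    For the KVM route, a plain tmpfs mount on the host is usually all the "RAM disk" that is needed; a rough sketch of the idea, with made-up paths and the 28GB sizing from the question:

      import shutil
      import subprocess

      subprocess.check_call(["mkdir", "-p", "/mnt/ramdisk"])
      subprocess.check_call(["mount", "-t", "tmpfs", "-o", "size=28g",
                             "tmpfs", "/mnt/ramdisk"])
      # Stage the ISO and disk image in RAM, then point the guest's -cdrom/-drive
      # (or libvirt disk definitions) at these copies. Everything here vanishes at
      # unmount or reboot, which suits a throwaway test VM.
      shutil.copy("/srv/isos/install.iso", "/mnt/ramdisk/install.iso")        # hypothetical path
      shutil.copy("/srv/images/test-vm.qcow2", "/mnt/ramdisk/test-vm.qcow2")  # hypothetical path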

    Read the article

  • Numbering grouped data in Excel

    - by Jeff
    I have an Excel spreadsheet (2010) with data similar to this:

      Dogs  Brown  Nice
      Dogs  White  Nice
      Dogs  White  Moody
      Cats  Black  Nice
      Cats  Black  Mean
      Cats  White  Nice
      Cats  White  Mean

    I want to group these animals, but I only care about species and color; I don't care about disposition. I want to assign group numbers to the set as shown here:

      1  Dogs  Brown  Nice
      2  Dogs  White  Nice
      2  Dogs  White  Moody
      3  Cats  Black  Nice
      3  Cats  Black  Mean
      4  Cats  White  Nice
      4  Cats  White  Mean

    I was able to select all the species and colors, then from the Data tab select 'Advanced', then 'Unique records only'. This collapsed the data so that I could number the visible rows. Then, when I cleared the filter, I could easily fill the blank areas under the numbers with the number above. The problem is that my real data has far too many rows for this to be practical. Also, the trick of entering 1 in the first cell and 2 in the cell below, selecting both, then dragging the corner down to auto-number doesn't seem to work when you're viewing filtered rows. Any way to do this?
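
    One approach that avoids filtering altogether is a helper-column formula (assuming species in column A, color in column B, data starting in row 2, and group numbers in column D): put 1 in D2, put =IF(AND(A3=A2,B3=B2),D2,D2+1) in D3, and fill down. For reference, the same grouping outside Excel, as a pandas sketch on made-up data matching the example:

      import pandas as pd

      df = pd.DataFrame({
          "species": ["Dogs", "Dogs", "Dogs", "Cats", "Cats", "Cats", "Cats"],
          "color":   ["Brown", "White", "White", "Black", "Black", "White", "White"],
          "mood":    ["Nice", "Nice", "Moody", "Nice", "Mean", "Nice", "Mean"],
      })
      # Number each (species, color) pair in order of first appearance.
      df["group"] = df.groupby(["species", "color"], sort=False).ngroup() + 1
      print(df)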

    Read the article

  • Can't upgrade NVIDIA GeForce 310M display driver on Acer Aspire 5745PG

    - by Emerson
    I've been trying for days to update my video driver. I have an Acer Aspire 5745PG with an "NVIDIA GeForce 310M" board, and I was trying to run the Sony Vegas video editor with Boris Continuum plugins. It happened that some of the plugins, like BCC Text Extrude, wouldn't work, showing the message "Insufficient depth resolution to run Blue". I then read somewhere that updating the display driver would do the trick. That was when my nightmares started; I've already lost a good 3 nights trying to sort this out, without success :( The display driver I had before (and that I currently have after restoring) was version 8.16.11.8997. The first thing I tried was downloading the 8.17.12.6619 driver directly from Acer, which was shown as the latest version on the Acer website: http://support.acer.com/product/default.aspx?modelId=2466 Running it would say "Driver Package Failure - Setup failed to read the required Display Driver to be used with this package". I then tried NVIDIA's own driver directly, the latest being version 296.10: http://us.download.nvidia.com/Windows/296.10/296.10-notebook-win7-winvista-64bit-international-whql.exe That gave me a similar error message :/ So after some researching I found out that some people had the same issue and they had to change the configuration file to allow the installer to recognize this NVIDIA board: http://forums.nvidia.com/index.php?showtopic=222904 That topic said to look for the "Device Instance Id" property of the "NVIDIA GeForce 310M" display, which I couldn't find; instead I found the "Hardware Id", which seemed to be the right one. I followed the instructions and changed the .inf file, first for the Acer installation and then for NVIDIA's own driver. It actually managed to go ahead with the installation in both instances, but the only thing I got was a black screen, while the computer still appeared to be running fine. I had to hard reset, and then it would come back with the generic VGA driver. I could only get my display back using the recovery function. I imagine thousands of these notebooks were sold, and they can't have their drivers updated?? Could someone help me with this?? Thanks, Echo

    Read the article

  • Disadvantages of enabling 'Low Fragmentation Heap' LFH on Windows Server 2003?

    - by James Wiseman
    I've been investigating an issue with a production Classic ASP website running on IIS6 which seems indicative of memory fragmentation. One of the suggestions for how to ameliorate this came from Stack Overflow: How can I find why some classic asp pages randomly take a real long time to execute? It suggested flipping a setting in the site's global.asa file to 'turn on' Low Fragmentation Heap (LFH). The following code (with a registered version of the accompanying DLL) did the trick. Set LFHObj=CreateObject("TURNONLFH.ObjTurnOnLFH") LFHObj.TurnOnLFH() application("TurnOnLFHResult")=CStr(LFHObj.TurnOnLFHResult) (Really the code isn't that important to the question). An author of a linked post reported a seemingly magic resolution to this issue, and, reading around a little more, I discovered that this setting is enabled by default on Windows Server 2008. So, naturally, this left me a little concerned: why is this setting not enabled by default on 2003, or, if it works in 2008, why have Microsoft not issued a patch to enable it by default on 2003? I suspect the answer to the above is the same for both (if there is one). Obviously, we're testing it in a non-production environment, and doing an array of metrics and comparisons to determine whether it actually helps us. But aside from this I'm really just trying to understand if there's any technical reason why we should do this, or if there are any gotchas that we need to be aware of.

    Read the article

  • Why *do* Windows print queues occasionally choke on a print job?

    - by Ian
    You know the way Windows print queues will occasionally stop working, with a print job at the head of the queue which just won't print and which you can't delete? Anyone know what's going on when this happens? I've been seeing this since the NT4 days and it still happens on 2008. I'm talking about standard IP-connected laser printers - nothing fancy. I support a lot of servers and loads of workstations and see this happen a few times a year. The user will call saying they can't print. When you examine the print queue, which in my case will generally be a server-based queue shared out to the workstations, you find a print job which you cannot cancel. You also can't pause it, reinitialize it, nothing. Stopping the spooler is the usual trick and works sometimes. However, I occasionally see cases which even this doesn't cure and for which a reboot is the only solution. Pause the queue, reboot, and when it comes back up the job can then be deleted. Once it's gone the printer happily goes back to its normal state. No action is ever necessary on the printer. I regard having to reboot as a last resort and don't like it. What on earth can be going on when stopping the process (spooler) and restarting it doesn't clear the problem? It's not linked to any manufacturer either. I've seen this on HP, Lexmark, Canon and Ricoh printers, on lasers, on plotters... can't say I ever saw it on dot matrix. Anyone got any ideas as to what may be going on? Ian
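
    When stopping the spooler alone doesn't release the job, the usual next step is to clear the spool directory while the service is down; a sketch using the default spool path (adjust if it has been relocated), noting that it wipes every queued job, not just the stuck one:

      import glob
      import os
      import subprocess

      SPOOL = r"C:\Windows\System32\spool\PRINTERS"   # default location of the .SPL/.SHD job files

      subprocess.check_call(["net", "stop", "spooler"])
      for path in glob.glob(os.path.join(SPOOL, "*")):
          os.remove(path)                             # delete the orphaned job files
      subprocess.check_call(["net", "start", "spooler"])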

    Read the article

  • Multiple *NIX Accounts with Identical UID

    - by Tim
    I am curious whether there is a standard expected behavior and whether it is considered bad practice when creating more than one account on Linux/Unix that have the same UID. I've done some testing on RHEL5 with this and it behaved as I expected, but I don't know if I'm tempting fate using this trick. As an example, let's say I have two accounts with the same IDs: a1:$1$4zIl1:5000:5000::/home/a1:/bin/bash a2:$1$bmh92:5000:5000::/home/a2:/bin/bash What this means is: I can log in to each account using its own password. Files I create will have the same UID. Tools such as "ls -l" will list the UID as the first entry in the file (a1 in this case). I avoid any permissions or ownership problems between the two accounts because they are really the same user. I get login auditing for each account, so I have better granularity into tracking what is happening on the system. So my questions are: Is this ability designed or is it just the way it happens to work? Is this going to be consistent across *nix variants? Is this accepted practice? Are there unintended consequences to this practice? Note, the idea here is to use this for system accounts and not normal user accounts.
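
    If the shared-UID scheme is adopted, it is worth auditing which UIDs are shared, since name-to-UID tools (ls -l, ps, quota reports) will show whichever account appears first in the passwd database. A small sketch:

      import pwd
      from collections import defaultdict

      accounts = defaultdict(list)
      for entry in pwd.getpwall():
          accounts[entry.pw_uid].append(entry.pw_name)

      for uid, names in sorted(accounts.items()):
          if len(names) > 1:
              print(uid, ", ".join(names))   # accounts sharing this UID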

    Read the article

  • Cisco 3560+ipservices -- IGMP snooping issue with TTL=1

    - by Jander
    I've got a C3560 with Enhanced (IPSERVICES) image, routing multicast between its VLANs with no external multicast router. It's serving a test environment where developers may generate multicast traffic on arbitrary addresses. Everything is working fine except when someone sends out multicast traffic with TTL=1, in which case the multicast packet suppression fails and the traffic is broadcast to all members of the VLAN. It looks to me like because the TTL is 1, the multicast routing subsystem doesn't see the packets, so it doesn't create a mroute table entry. If I send out packets with TTL=2 briefly, then switch to TTL=1 packets, they are filtered correctly until the mroute entry expires. My question: is there some trick to getting the switch to filter the TTL=1 packets, or am I out of luck? Below are the relevant parts of the config, with a representative VLAN interface. I can provide more info as needed. #show run ... ip routing ip multicast-routing distributed no ip igmp snooping report-suppression ! interface Vlan44 ip address 172.23.44.1 255.255.255.0 no ip proxy-arp ip pim passive ... #show ip igmp snooping vlan 44 Global IGMP Snooping configuration: ------------------------------------------- IGMP snooping : Enabled IGMPv3 snooping (minimal) : Enabled Report suppression : Disabled TCN solicit query : Disabled TCN flood query count : 2 Robustness variable : 2 Last member query count : 2 Last member query interval : 1000 Vlan 44: -------- IGMP snooping : Enabled IGMPv2 immediate leave : Disabled Multicast router learning mode : pim-dvmrp CGMP interoperability mode : IGMP_ONLY Robustness variable : 2 Last member query count : 2 Last member query interval : 1000
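
    To reproduce the behaviour on demand (the TTL=2-then-TTL=1 sequence described above), a tiny test sender is handy; a sketch using an arbitrary group and port:

      import socket

      GROUP, PORT = "239.1.2.3", 5000   # arbitrary test group/port
      ttl = 1                           # change to 2 to watch the mroute entry get created first

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
      sock.sendto(b"test payload", (GROUP, PORT))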

    Read the article

  • SELinux blocking connection to sshd on Ubuntu 9.10

    - by Barton Chittenden
    When I try to log on to my laptop, which runs Ubuntu 9.10, the server rejects my login attempts. Checking /var/log/auth.log, I see the following: Feb 14 12:41:16 tiger-laptop sshd[6798]: error: ssh_selinux_getctxbyname: Failed to get default SELinux security context for tiger I googled for this, and ran across the following: http://www.spinics.net/lists/fedora-.../msg13049.html Here's the part that I think relates to the problem that I'm having: Quote: What's wrong on my system? Why it's not possible to login even if selinux is in permissive mode? Any suggestions? I'd start by trying to figure out why sshd isn't running in sshd_t (it seems to be running in sysadm_t). Paul. selinux mailing list selinux@xxxxxxxxxxxxxxxxxxxxxxx https://admin.fedoraproject.org/mail...stinfo/selinux Yes, sshd is running in sysadm_t: ps axZ | grep sshd system_u:system_r:sysadm_t 3632 ? Ss 0:00 /usr/sbin/sshd -o PidFile=/var/run/sshd.init.pi ls -Z /usr/sbin/sshd system_u:object_r:sshd_exec_t /usr/sbin/sshd Don't know why it's not sshd_t. I didn't modified something. It's a standard installation of sles11 with the default reference policy from tresys. Maybe this code snippet from policy/modules/services/ssh.te is responsible for that: Allow ssh logins as sysadm_r:sysadm_t gen_tunable(ssh_sysadm_login, true) Any ideas? Do you have boolean init_upstart set to on? if not try setting it to on. I do not believe ssh_sysadm_login boolean works currently but i may be mistaken. -- Yeah, setting init_upstart to on did the trick! THANK A LOT! Do you know why this prevents the user from logging in through ssh even if selinux is set to permissive?? Ok, so the million dollar question is "where do I set 'init_upstart=1'"? It's not clear from context which configuration file needs to be edited, and I'm not at all familiar with SELinux configuration.
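
    To answer the closing question: SELinux booleans like init_upstart are normally flipped with setsebool (with -P to persist across reboots) rather than by editing a configuration file, assuming the loaded policy on this release actually defines that boolean. A sketch:

      import subprocess

      # Persistently enable the boolean named in the mailing-list thread, then confirm it.
      subprocess.check_call(["setsebool", "-P", "init_upstart", "on"])
      print(subprocess.check_output(["getsebool", "init_upstart"]).strip())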

    Read the article

  • Use a preferred username but authenticate against Kerberos principal

    - by Jason R. Coombs
    What I desire to do should be pretty simple. I have an Ubuntu 10.04 box. It's currently configured to authenticate users against a Kerberos realm (EXAMPLE.ORG). There is only one realm in the krb5.conf file and it is the default realm. [libdefaults] default_realm = EXAMPLE.ORG PAM is configured to use the pam_krb5 module, so if a user account is created on the local machine, and that username matches the corresponding username@EXAMPLE.ORG credential, that user may log in by supplying his Kerberos password. What I would like to do instead is create a local user account with a different username, but have it always authenticate against the canonical name in the Kerberos server. For example, the Kerberos principal is full.name@EXAMPLE.ORG. I would like to create the local account preferred.name and somehow configure Kerberos so that when someone attempts to log in as preferred.name, it uses the principal full.name@EXAMPLE.ORG. I have tried using auth_to_local_names in krb5.conf, but this doesn't seem to do the trick. [realms] EXAMPLE.ORG = { auth_to_local_names = { full.name = preferred.name } I have tried adding full.name@EXAMPLE.ORG to ~preferred.name/.k5login. In all cases, when I attempt to log in as preferred.name@host and enter the password for full.name, I get Access denied. I even tried using auth_to_local in krb5.conf, but I couldn't get the syntax right. Is it possible to have a (distinct) local username that for all purposes behaves exactly like a matching username does? If so, how is this done?

    Read the article

  • Cannot delete flash9.ocx

    - by Kara Marfia
    Some kinda voodoo, indeed. Bought a new boot drive, and it's time to use the old drive for data. I thought I'd save some time by just wiping the unneeded system folders, instead of backing up, formatting, and restoring. Wups! I have a single Adobe file that absolutely will not be deleted. G:\Windows\SysWOW64\Macromed\Flash\flash9.ocx - though it may have started with a different file name. I'm able to rename it, oddly enough. To be clear, the drive is currently plugged in externally. So I can boot the computer, plug this drive in afterward, and immediately attempt to delete. "File in use" box reads "the action can't be completed because the file is open in another program". I'd format at this point, but it's my white whale, and I have to know if Adobe has inserted some nasty little registry hack - or whatever it is - making this impossible. Since I'm sure it'll come up, I've taken ownership of the file - and this was the trick preventing me from deleting anything else on the drive - full rights on the file permissions, you name it, I've fiddled with the file itself. I'm about to try uninstalling flash from the system drive, in case that aligns the planets properly. Sometimes I wish I were less stubborn, and could just format already.

    Read the article

  • "A disk read error occurred" after choosing to boot into Windows XP from GRUB

    - by kellogs
    "A disk read error occurred" appears on screen after choosing to boot into Windows XP from GRUB. [root@localhost linux]# fdisk -lu Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x48424841 Device Boot Start End Blocks Id System /dev/sda1 63 204214271 102107104+ 7 HPFS/NTFS Partition 1 does not end on cylinder boundary. /dev/sda2 204214272 255606783 25696256 af HFS / HFS+ Partition 2 does not end on cylinder boundary. /dev/sda3 255606784 276488191 10440704 c W95 FAT32 (LBA) Partition 3 does not end on cylinder boundary. /dev/sda4 276490179 312576704 18043263 5 Extended /dev/sda5 * 276490240 286709759 5109760 83 Linux /dev/sda6 286712118 310488254 11888068+ b W95 FAT32 /dev/sda7 310488318 312576704 1044193+ 82 Linux swap / Solaris Here, sda is a 160GB hard disk with quite a few partitions and 3 OSes installed. I am able to boot into Linux and Mac OS fine, but not into Windows anymore. The Windows system is located on /dev/sda1. I cannot recall how exactly have I used testdisk but it once said: Disk /dev/sda - 160 GB / 149 GiB - CHS 19458 255 63 The harddisk (160 GB / 149 GiB) seems too small! (< 169 GB / 157 GiB) Check the harddisk size: HD jumper settings, BIOS detection... So far I have tried to "fixboot" and "chkdsk" from a recovery console on the affected windows partition (/dev/sda1), the plug off power cord for 15 seconds trick, reinstalling GRUB, repairing the MFT and boot sector of the affected partition via testdisk, what next please? Thank you!

    Read the article

  • Is there any way to force my Linux box to always boot up with a self-assigned IP address?

    - by Jeremy Friesner
    This is perhaps an unusual request: I'm trying to get a Debian Linux box to always give itself a self-assigned IP address (i.e. 169.254.x.y) on boot. In particular, I want it to do that even when there is a DHCP server present on the LAN. That is, it should not request an IP address from the DHCP server. From what I can see in the "man interfaces" text, there is an option for "manual", and an option for "dhcp". Manual assignment won't do, since I need multiple boxes to work on the same LAN without requiring any manual configuration... and "dhcp" does what I want, but only if there is no DHCP server on the LAN. (A requirement is that the functionality of these boxes should not be affected by the presence or absence of a DHCP server). Is there a trick that I can use to get this behavior? EDIT: By "no manual configuration", I mean that I should be able to take this box (headless) to any LAN anywhere, plug in the Ethernet cable, and have it do its thing. I shouldn't have to ssh to the box and edit files to get it working each time it is moved to a different LAN.

    Read the article

  • How to set umask globally?

    - by DevSolar
    I am using a private user group setup, i.e. a user foo's home directory is owned by foo:foo, not foo:users. For this to work, I need to set the umask to 002 globally. After a quick grep -RIi umask /etc/*, it seemed for a moment that modifying the UMASK entry in /etc/login.defs should do the trick. It does, too -- but only for console logins. If I log in to my desktop, and open a terminal there, I still get to see the default umask 022. Same goes for files created from apps started through the menu. Apparently, the display manager (or whatever X11 component responsible) does source some different setting than a console login does, and damned if I could tell which one it is. (I tried changing the setting in /etc/init.d/rc, and no, it did not help.) How / where do I set umask globally (and for all users), so that the X11 desktop environment gets the memo as well? (The system is Linux Mint / Ubuntu, in case that changes anything...)

    Read the article

  • Creating a pseudoterminal to make sudo happy

    - by larsks
    I need to automate the provisioning of a cloud instance (running Fedora 17) for which the following initial facts are true: I have ssh-key based access to a remote user (cloud), and that user has password-free root access via sudo. Manual configuration is as simple as logging in and running sudo su - and having at it, but I would like to fully automate this process. The trick is that the system defaults to having the requiretty option enabled for sudo, which means that an attempt to do something like this: ssh remotehost sudo yum -y install puppet will fail with: sudo: sorry, you must have a tty to run sudo I am working around this right now by first pushing over a small Python script that will run a command on a pseudoterminal:

      import os
      import sys
      import errno
      import subprocess

      pid, master_fd = os.forkpty()
      if pid == 0:
          # child process: now that we're attached to a
          # pty, run the given command.
          os.execvp(sys.argv[1], sys.argv[1:])
      else:
          while True:
              try:
                  data = os.read(master_fd, 1024)
              except OSError, detail:
                  if detail.errno == errno.EIO:
                      break
              if not data:
                  break
              sys.stdout.write(data)
          os.wait()

    Assuming that this is named pty, I can then run: ssh remotehost ./pty sudo yum -y install puppet This works fine, but I'm wondering if there are solutions already available that I haven't considered. I would normally think about expect, but it's not installed by default on this system. screen can do this in a pinch, but the best I came up with was: screen -dmS sudo somecommand ...which does work but eats the output. Are there any other tools available that will allocate a pseudoterminal for me that are going to be generally available?
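
    For what it's worth, the standard library's pty module wraps the same pattern, so the helper can shrink to a few lines (a sketch; exit-status and window-size handling differ slightly from the hand-rolled forkpty/os.read loop above):

      #!/usr/bin/env python
      import pty
      import sys

      # Allocate a pseudoterminal, run the requested command in it, and relay its output.
      pty.spawn(sys.argv[1:])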

    Read the article

  • Encrypting peer-to-peer application with iptables and stunnel

    - by Jonathan Oliver
    I'm running legacy applications for which I do not have access to the source code. These components talk to each other using plaintext on a particular port. I would like to be able to secure the communications between two or more nodes using something like stunnel to facilitate peer-to-peer communication, rather than using a more traditional (and centralized) VPN package like OpenVPN, etc. Ideally, the traffic flow would go like this: app@hostA:1234 tries to open a TCP connection to app@hostB:1234. iptables captures and redirects the traffic on port 1234 to stunnel running on hostA at port 5678. stunnel@hostA negotiates and establishes a connection with stunnel@hostB:4567. stunnel@hostB forwards any decrypted traffic to app@hostB:1234. In essence, I'm trying to set this up so that any outbound traffic (generated on the local machine) to port N forwards through stunnel to port N+1, and the receiving side receives on port N+1, decrypts, and forwards to the local application at port N. I'm not particularly concerned about losing the hostA origin IP address/machine identity when stunnel@hostB forwards to app@hostB, because the communications payload contains identifying information. The other trick in this is that normally with stunnel you have a client/server architecture. But this application is much more P2P, because nodes can come and go dynamically and hard-coding some kind of "connection = hostN:port" in the stunnel configuration won't work.
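
    The interception described above (catching locally generated traffic to the app port and handing it to the local stunnel listener) is typically a NAT-table REDIRECT rule; a sketch with the ports from the question, wrapped in Python only for consistency with the other examples. The owner match is an assumption (that stunnel runs as a dedicated "stunnel" user) to keep stunnel's own forward to app@hostB:1234 from being captured again:

      import subprocess

      # Redirect outbound TCP to the app port (1234) into the local stunnel client (5678),
      # except for traffic generated by the stunnel user itself.
      subprocess.check_call([
          "iptables", "-t", "nat", "-A", "OUTPUT",
          "-p", "tcp", "--dport", "1234",
          "-m", "owner", "!", "--uid-owner", "stunnel",
          "-j", "REDIRECT", "--to-ports", "5678",
      ])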

    Read the article

  • virtual machines, dual booting and data disks on SSD

    - by stevemarvell
    This is in planning, so if I've got the strategy wrong, please let me know. There are multiple questions here, but I think they all degenerate to the same answers. The hardware is a laptop with a single SSD. I'm trying to not lose the performance of the SSD. I plan a natively dual-booting Windows (plus Cygwin) and Linux machine which is my BYOD and represents the development environment. I keep the codebase on a shared partition (though sometimes this is an external Thunderbolt SSD) which can be natively "mounted" by whichever OS is in operation. I boot into one or the other environment depending on the task at hand. Sometimes I have to develop with Windows tools, but generally, Linux is my preferred development environment. It would be ideal if I could VM the other OS and run either in either. I'm going to assume, because I've not found a sensible VM-based solution, that I have to get Samba involved to share the code partition between VMs. Is this going to blow my SSD performance in the VM? The client also supplies me with a VM for the target environment, usually Linux. This is not often suited to development and is used for testing only. I normally keep two copies of this, one as a sandbox and one which I deploy to using the client's preferred method. I keep these VM snapshots on the shared partition. The latter is interacted with over the network and so has no disk sharing requirements. However, it would be useful for the sandbox to be able to "mount" the code base from the natively running OS. Is this Samba or NFS again, depending on the native OS? Am I missing a trick which allows this all to work smoothly with all four environments running at once without losing the SSD performance?

    Read the article

  • Why can't I debug my ASP project through a remote desktop connection?

    - by Anthony Benavente
    I just asked this question on Stack Overflow, but I figured this Stack Exchange site is a better fit. It's been about a month of trying to figure out this problem and we've still not found a solution. We have about seven virtual machines on a server running Windows XP Professional w/ SP3, all with Visual Studio InterDev and IIS 5.1 installed. Running the programs all works fine, but we just can't debug through Remote Desktop. When we are logged into the server console (through vSphere) and log into one of the virtual machines through there, we are able to debug properly. We figured the issue lies with some kind of permissions for Remote Desktop Users. We've tried nearly every article on the internet (exaggerating, of course) and are about to give up hope. One more thing: when we are logged into the virtual machine through the server console and then remote in, the user that was logged into the console is kicked off but debugging works! Does remoting in trick the computer into giving us the correct permissions? I'm really not sure how it works. I know that this technology predates human history, but we are in the process of migrating from ASP Classic to ASP.NET. Specs: - Windows XP Professional w/ SP3 - IIS 5.1 - Visual Studio 6 InterDev EDIT: By "debug" I mean running the project with breakpoints. InterDev doesn't stop at breakpoints.

    Read the article

  • Windows 8.1 Upgrade: I have to run everything as administrator now?

    - by Robert Dailey
    I was running Windows 8 x64 Professional before and I never had to run programs as administrator to get them to function fully. Examples: Chrome OpenVPN GUI I always have my user under the local "Administrators" group and also disable UAC by putting the slider for it at the very bottom. This always did the trick. After the Windows 8.1 upgrade, I run into a few issues: Running Chrome normally, the Chrome icon doesn't appear in the taskbar. Chrome won't run in the background. OpenVPN GUI has errors when launching. Running both as administrator (Right Click Run as Administrator) fixes those issues and they run perfectly. What has changed in Windows 8.1 upgrade to cause these "problems"? I'm an advanced user, I don't want to have to worry about administrative rights. Any advice on how to fix these problems? EDIT I also get prompted for administrator permission to delete directories under "Program Files" now... that never used to happen before. I can hit Continue and it will allow me to delete, but just another symptom of the problem...

    Read the article

  • a disk read error occurred [closed]

    - by kellogs
    Hi, "a disk read error occurred" appears on screen after choosing to boot into Windows XP from GRUB. [root@localhost linux]# fdisk -lu Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x48424841 Device Boot Start End Blocks Id System /dev/sda1 63 204214271 102107104+ 7 HPFS/NTFS Partition 1 does not end on cylinder boundary. /dev/sda2 204214272 255606783 25696256 af HFS / HFS+ Partition 2 does not end on cylinder boundary. /dev/sda3 255606784 276488191 10440704 c W95 FAT32 (LBA) Partition 3 does not end on cylinder boundary. /dev/sda4 276490179 312576704 18043263 5 Extended /dev/sda5 * 276490240 286709759 5109760 83 Linux /dev/sda6 286712118 310488254 11888068+ b W95 FAT32 /dev/sda7 310488318 312576704 1044193+ 82 Linux swap / Solaris sda is a 160GB hard disk with quite a few partitions and 3 OSes installed. I am able to boot into Linux and Mac OS fine, but not into Windows anymore. The Windows system is located on /dev/sda1. I cannot recall exactly how I used testdisk, but it once said that "The harddisk /dev/sda (160GB / 149 GB) seems too small! (< 172GB / 157GB)" or something similar. So far I have tried to "fixboot" and "chkdsk" from a recovery console on the affected Windows partition (/dev/sda1), the unplug-the-power-cord-for-15-seconds trick, reinstalling GRUB, and repairing the MFT and boot sector of the affected partition via testdisk. What next, please? Thank you!

    Read the article

  • Faster, secure protocol/code required for long-distance transfer

    - by Chopper3
    I've run into a problem and I'm looking for a new secure protocol/client/server that's faster over a 1Gb/s fibre link - let me tell you the story... I have a pair of redundant, diversely-routed, 1Gb/s links over a distance of around 250 miles or so (not dark fibre but a dedicated point-to-point link, not a mesh). At the 'client' end I have an HP DL380 G5 (2 x dual-core 2.66GHz Xeons, 4GB, Windows 2003EE 32-bit); at the 'server' end I have an HP BL460c G6 (2 x quad-core 2.53GHz Xeons, 48GB, Oracle Linux 5.3 64-bit). I need to transfer around 500 x 2GB files per week from the client to the server machine - but the transfer NEEDS to be secure. Using either iPerf or regular FTP I can get ~80MB/s of transfer pretty consistently, which is great. Using WinSCP or Windows SFTP I can't seem to get more than ~3-4MB/s; at this point the server's CPU is 3% busy while CPU0 of the client goes to ~30% utilised. We've tried editing various TCP window sizes with little success. Both ends are connected to quite low-usage Cisco Cat6509s with Sup720s. I can replace the client machine with a newer machine and/or move it to Linux - but this will take time. Clearly these single-threaded secure Windows clients are introducing too much latency doing their encryption. So a few questions/thoughts: Are there any higher-performing secure protocols or client software for Windows that I could try? I'm pretty protocol-agnostic so long as it'll work between Windows and Linux. Should I be using hardware to do the encryption, either in the client or the network parts? If so, what would you recommend? I'm not convinced that just swapping the server would be that much faster; the CPU was only at 30%, but then again that's higher than I'd have expected given the load - moving to Linux at the client end may be a better idea but would be quite disruptive. Am I missing a trick? Thanks in advance.

    Read the article

  • Enter network credentials as part of batch script

    - by Michael
    WinXP: I have several system services that are needed to run some machinery in my lab. The machine these services run on uses a lab login that has administrator rights. Our IS department, unfortunately, has it set up so that at some point during the night the login "loses" the privilege level needed to start/stop these services. The account stays logged in, but the software controlling my hardware becomes unresponsive. In order to get things back up and running, I have to stop the system services and restart them. Because of the security settings, however, I have to re-enter the user password to start the service (even though the user was never logged out). That is, I get "This service cannot be started due to a logon failure" and I have to enter the password. What would be ideal is to have a batch script run before anyone gets into work that stops all of the necessary services, enters the user credentials when prompted, and then restarts them so that everything is ready for first shift to run. I assumed that using the Task Scheduler in Windows would work, as it allows you to run batch files with a user's name and password, but this didn't seem to do the trick. With this setup I would arrive to find that all the services were stopped but not started again. (Presumably because the authentication failed.) The batch file is about as simple as it gets; all I have is: net stop "Service1" net stop "Service2" etc., then restart in reverse order based on dependency: net start "Service2" net start "Service1" What would it take to accomplish what I'm trying to do and restart the services?
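
    If the stoppage really is the services' logon credentials being rejected, one option (assuming the services run under the lab account; the account name and password below are placeholders) is to re-apply the logon account with sc config in the same scheduled script before starting them, instead of answering a prompt interactively:

      import subprocess

      SERVICES = ["Service1", "Service2"]          # stop order, as in the batch file
      ACCOUNT, PASSWORD = r".\labuser", "secret"   # placeholders for the lab login

      for name in SERVICES:
          subprocess.call(["net", "stop", name])

      for name in reversed(SERVICES):              # start in reverse (dependency) order
          # sc requires the space after obj= and password=.
          subprocess.check_call(["sc", "config", name, "obj=", ACCOUNT, "password=", PASSWORD])
          subprocess.check_call(["net", "start", name])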

    Read the article

  • Excel or Access: how to group several lines in a table and insert contents in columns? ("split column")

    - by Martin
    I have a table containing data of sold products (shown in the example on the left): Columns: Number of the order Product Name Attribute - specifies what is given in the following field "value", e. g. Customer Name or Product Variant Value - is the value of the Attribute Count - is the number of products of this variant sold in the order That means: Product B has 2 variants "c" and "d" Note that in Order 1 Product B was sold in Variant d only, because the letter "N" in field "D4" means "none". Note, that in OrdnerNo 3 Product B was sold only in Variant c, because for Variant d field "D9" is "N"!! This is confusing, but it is the structure of the original data (which I can not change). I need a way to convert the table on the left in a table like that on the right: one line for each product type Order Number Product Name Customer Name Count (number of products sold in this order) Variant - this is the problem, as it has to be filled with the So all rows with the same OrderNo and same product have to be grouped in to one, and I hope it is clear what I need. I tried to do it with Pivot Tables, but that fails, as the Count is always in each line, no matter if it has Value "N" or not and for the products without variants there is only one line for each order, however for products with variants there are several... So how could I create the right table with a VBA macro in MS Excel or maybe there is a trick in MS Access to do it directly or with an SQL query?

    Read the article
