Search Results

Search found 20883 results on 836 pages for 'wont say'.


  • DMG mounting warning message says "it may make computer less secure or cause other problems"

    - by Cawas
    When I try to open a DMG file I get this (transcribing the image): "There may be a problem with this disk image. Are you sure you want to open it? Opening this disk image may make your computer less secure or cause other problems." What does that mean in fact? What's really wrong with it, and what kind of problem can it cause just by mounting?

    Someone said: "When you download a file in Leopard (and Snow Leopard), it's marked as a quarantined file. This occurs by the OS adding an attribute to the file, tagging where it came from (such as 'downloaded by Safari'). This is what causes the user to see prompts when running files that were downloaded from the Internet; you may remember being asked to confirm you'd like to launch program XXX downloaded by Safari on XXX date. As a new part of Snow Leopard, files which are tagged with the quarantine attribute also have their integrity checked by fsck, and if that verification fails you will see the message you described, triggered by an unused node in the disc image."

    But really, I didn't get that. What is quarantine? I've just downloaded a file here on Snow Leopard, tried to open it, and got that warning. Apple has something to say about quarantined files, and they seem to work the same on both Leopard and Snow Leopard. Plus, I got that file using Google Chrome, while that feature seems to apply just to Safari.
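    For what it's worth, Chrome sets the same quarantine marker that Safari does; it is an extended attribute on the downloaded file. A minimal way to look at (or, only if you trust the file, clear) it from Terminal - example.dmg is a placeholder for the actual download:

        xattr -l ~/Downloads/example.dmg                         # look for com.apple.quarantine
        xattr -d com.apple.quarantine ~/Downloads/example.dmg    # remove the flag, only if the file is trusted
        hdiutil verify ~/Downloads/example.dmg                   # check the image's own checksum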

    Read the article

  • MySQL-Cluster or Multi-Master for production? Performance issues?

    - by Phillip Oldham
    We are expanding our network of webservers on EC2 to a number of different regions and currently use master/slave replication. We've found that over the past couple of months our slave has stopped replicating a number of times, which required us to clear the DB and initialise the replication again. As we're now looking to have servers in 3 different regions, we're a little concerned about these MySQL replication errors. We believe they're due to auto_increment values, so we're considering two approaches to quell these errors and stabilise replication:

    1. Multi-master replication: 3 masters (one in each region), with the relevant auto_increment offsets, regularly backing up to S3.
    2. MySQL Cluster: 3 nodes (one in each region) with a separate management node which will also aggregate logs and statistics.

    After investigating, it seems they both have downsides (replication errors for the former, performance issues for the latter). We believe the cluster approach would allow us to manage and add new nodes more easily than the multi-master route, and would reduce or eliminate the replication issues we're currently seeing. But performance is a priority. Are the performance issues of MySQL Cluster as bad as people say?
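    For reference, the auto_increment offsets mentioned in option 1 are normally set per master in my.cnf. A minimal sketch, assuming three masters; the server IDs and log names are placeholders that must match your own topology:

        # master in region 1 (the masters in regions 2 and 3 use offsets 2 and 3)
        [mysqld]
        server-id                = 1
        log-bin                  = mysql-bin
        auto_increment_increment = 3   # total number of masters writing
        auto_increment_offset    = 1   # unique per master: 1, 2 or 3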

    Read the article

  • Why does "commit" appear in the mysql slow query log?

    - by Tom
    In our MySQL slow query logs I often see lines that just say "COMMIT". What causes a commit to take time? Another way to ask this question is: "How can I reproduce getting a slow 'commit;' statement with some test queries?"

    From my investigation so far I have found that if there is a slow query within a transaction, then it is the slow query that gets output into the slow log, not the commit itself.

    Testing in the mysql command line client:

        mysql> begin;
        Query OK, 0 rows affected (0.00 sec)
        mysql> UPDATE members SET myfield=benchmark(9999999, md5('This is to slow down the update')) WHERE id = 21560;
        Query OK, 0 rows affected (2.32 sec)
        Rows matched: 1  Changed: 0  Warnings: 0

    At this point (before the commit) the UPDATE is already in the slow log.

        mysql> commit;
        Query OK, 0 rows affected (0.01 sec)

    The commit happens fast; it never appeared in the slow log. I also tried an UPDATE which changes a large amount of data, but again it was the UPDATE that was slow, not the COMMIT. However, I can reproduce a slow ROLLBACK that takes 46s and gets output to the slow log:

        mysql> begin;
        Query OK, 0 rows affected (0.00 sec)
        mysql> UPDATE members SET myfield=CONCAT(myfield,'TEST');
        Query OK, 481446 rows affected (53.31 sec)
        Rows matched: 481446  Changed: 481446  Warnings: 0
        mysql> rollback;
        Query OK, 0 rows affected (46.09 sec)

    I understand why rollback has a lot of work to do and therefore takes some time. But I'm still struggling to understand the COMMIT situation - i.e. why it might take a while.

    Read the article

  • VPN with client-to-client direct connectivity?

    - by Johannes Ernst
    When setting up a VPN, clients (say client1 and client2) usually authenticate to a server, and together the three constitute the VPN. When client1 wishes to send a packet to client2, this packet usually gets routed by way of the server.

    Are there products / configuration blueprints for products where it is possible to send packets directly from client1 to client2 without going through the server (if the underlying network topology permits it, e.g. no firewalls in the way)? If not, is there a way by which client1 can send a packet to client2 by way of the server, without the server being able to snoop on the content of the packet (e.g. because the packet is encrypted with the public key of client2)?

    I just asked in the OpenVPN forum, and the answer I got was "not with OpenVPN". So my question is: are there other products with which this is possible? Open source preferred...

    One use case: client1 and client2, typically in separate offices, find themselves both at headquarters. Do they still need to talk to each other via the public internet? Links appreciated. Thank you.

    Read the article

  • Why can a local root turn into any LDAP user?

    - by Daniel Gollás
    I know this has been asked here before, but I am not satisfied with the answers and don't know if it's OK to revive and hijack an older question.

    We have workstations that authenticate users against an LDAP server. However, the local root user can su into any LDAP user without needing a password. From my perspective this sounds like a huge security problem that I would hope could be avoided at the server level. I can imagine the following scenario where a user can impersonate another, and I don't know how to prevent it:

    1. UserA has limited permissions, but can log into a company workstation using their LDAP password. They can cat /etc/ldap.conf to figure out the LDAP server's address and run ifconfig to check out their own IP address. (This is just an example of how to get the LDAP address; I don't think that is usually a secret, and obscurity is not hard to overcome.)
    2. UserA takes out their own personal laptop, configures authentication and network interfaces to match the company workstation, plugs the network cable from the workstation into their laptop, boots and logs in as local root (it's their laptop, so they have local root).
    3. As root, they su into any other user on LDAP who may or may not have more permissions (without needing a password!), but at the very least, they can impersonate that user without any problem.

    The other answers on here say that this is normal UNIX behavior, but it sounds really insecure. Can the impersonated user act as that user on an NFS mount, for example? (The laptop even has the same IP address.) I know they won't be able to act as root on a remote machine, but they can still be any other user they want! There must be a way to prevent this at the LDAP server level, right? Or maybe at the NFS server level? Is there some part of the process that I'm missing that actually prevents this? Thanks!!

    Read the article

  • Server Restarts and Respective Orders

    - by TheD
    EDIT: Not meaning to be disrespectful to any of the answers, but the main question was whether to reboot a DC at the beginning of a cycle and then all the other servers, or to reboot it at the end once all the others are back online - is there a reason for doing it either way? I'm still not sure based on current responses.

    This will most likely seem like a fairly, maybe even stupid, question, but it's something I have been wondering about. As part of a regular process for clients, servers are restarted remotely after patches, and every client tends to have a similar order - but there always seems to be a small debate when it comes down to when you reboot your DC.

    For example, with 4 servers - 1 DC, 1x Exchange, 1x BESX and 1x random (let's say it has some CRM software installed) - is it best to reboot the DC first, then Exchange, then BESX and so on, or to reboot all the servers and then reboot the DC last? Perhaps it doesn't matter at all and it's just a case of how you have always done it.

    Would it change in a Hyper-V environment, for example, with a physical DC and 1 VHost with all your other servers virtualised on that host? Rebooting the VHost and virtual machines first, then the DC at the end, or vice versa? Thanks!

    Read the article

  • Postfix Postscreen: how to use postscreen for both smtp and smtps

    - by petermolnar
    I'm trying to get postscreen to work. I've followed the man page and it's already running correctly for smtp. But if I want to use it for smtps as well (adding the same line as smtp in master.cf but with smtps), I receive failure messages in syslog like:

        postfix/postscreen[8851]: fatal: btree:/var/lib/postfix/postscreen_cache: unable to get exclusive lock: Resource temporarily unavailable

    Some say that postscreen can only run once; that's OK. But can I use the same postscreen session for both smtp and smtps? If not, how can I enable postscreen for smtps as well? Any help would be appreciated!

    The relevant parts of the configs:

    main.cf:

        postscreen_access_list = permit_mynetworks, cidr:/etc/postfix/postscreen_access.cidr
        postscreen_dnsbl_threshold = 8
        postscreen_dnsbl_sites = dnsbl.ahbl.org*3 dnsbl.njabl.org*3 dnsbl.sorbs.net*3 pbl.spamhaus.org*3 cbl.abuseat.org*3 bl.spamcannibal.org*3 nsbl.inps.de*3 spamrbl.imp.ch*3
        postscreen_dnsbl_action = enforce
        postscreen_greet_action = enforce

    master.cf (full):

        smtpd      pass  -  -  n  -      -  smtpd
        smtp       inet  n  -  n  -      1  postscreen
        tlsproxy   unix  -  -  n  -      0  tlsproxy
        dnsblog    unix  -  -  n  -      0  dnsblog
        ### the problematic line ###
        smtps      inet  n  -  -  -      -  smtpd
        pickup     fifo  n  -  -  60     1  pickup
        cleanup    unix  n  -  -  -      0  cleanup
        qmgr       fifo  n  -  n  300    1  qmgr
        tlsmgr     unix  -  -  -  1000?  1  tlsmgr
        rewrite    unix  -  -  -  -      -  trivial-rewrite
        bounce     unix  -  -  -  -      0  bounce
        defer      unix  -  -  -  -      0  bounce
        trace      unix  -  -  -  -      0  bounce
        verify     unix  -  -  -  -      1  verify
        flush      unix  n  -  -  1000?  0  flush
        proxymap   unix  -  -  n  -      -  proxymap
        proxywrite unix  -  -  n  -      1  proxymap
        smtp       unix  -  -  -  -      -  smtp
        relay      unix  -  -  -  -      -  smtp
        showq      unix  n  -  -  -      -  showq
        error      unix  -  -  -  -      -  error
        retry      unix  -  -  -  -      -  error
        discard    unix  -  -  -  -      -  discard
        local      unix  -  n  n  -      -  local
        virtual    unix  -  n  n  -      -  virtual
        lmtp       unix  -  -  -  -      -  lmtp
        anvil      unix  -  -  -  -      1  anvil
        scache     unix  -  -  -  -      1  scache
        dovecot    unix  -  n  n  -      -  pipe
            flags=DRhu user=virtuser:virtuser argv=/usr/bin/spamc -e /usr/lib/dovecot/deliver -d ${recipient} -f {sender}

    Read the article

  • Redundant OpenVPN connections with advanced Linux routing over an unreliable network

    - by konrad
    I am currently living in a country that blocks many websites and has unreliable network connections to the outside world. I have two OpenVPN endpoints (say vpn1 and vpn2) on Linux servers that I use to circumvent the firewall. I have full access to these servers. This works quite well, except for the high packet loss on my VPN connections. This packet loss varies between 1% and 30% depending on the time of day and seems to have little correlation with anything; most of the time it seems random.

    I am thinking about setting up a home router (also on Linux) that maintains OpenVPN connections to both endpoints and sends all packets twice, to both endpoints. vpn2 would forward all packets from home on to vpn1. Return traffic would be sent both directly from vpn1 to home, and also through vpn2.

        +------------+
        |    home    |
        +------------+
           |        |
           | OpenVPN |
           |  links  |
           |        |
        ~~~~~~~~~~~~~~~~~~ unreliable connection
           |        |
        +----------+   +----------+
        |   vpn1   |---|   vpn2   |
        +----------+   +----------+
             |
        +------------+
        | HTTP proxy |
        +------------+
             |
         (internet)

    For clarity: all packets between home and the HTTP proxy will be duplicated and sent over different paths, to increase the chances that one of them will arrive. If both arrive, the second one can be silently discarded. Bandwidth usage is not an issue, on both the home side and the endpoint side. vpn1 and vpn2 are close to each other (3 ms ping) and have a reliable connection. Any pointers on how this could be achieved using the advanced routing policies available in Linux?

    Read the article

  • Cooling for a small server room

    - by John Zwinck
    I have a server room about 12 feet square with an unfinished ceiling (exposed ducts and wiring). It houses a few servers (about ten, 1U and 2U) and some networking gear (four 1U switches, three routers, three modems, two cable boxes). With the door closed, it runs around 80 degrees Fahrenheit with half the servers turned on. When I turned on all the servers it reached 86 before I chickened out and propped the door open.

    The room is adjacent to air-conditioned office space, but does not itself have dedicated air conditioning. The ventilation for this room seems to be limited to one duct coming in at ceiling level, with a powered fan to draw air in, and one duct at ceiling level to allow air to flow out (it seems like it may just go into the drop ceiling cavity in the adjacent room). The adjacent office space stays fairly cool, but I'd prefer not to leave the door propped open all the time. There is both 110v and 208v service in the room, and plenty of power available. But there are no windows, and no floor drains (in a pinch we might be able to run a condensation hose through a small hole we'd drill in the wall to a nearby sink area, but only if absolutely necessary).

    I've considered portable A/C units, but I'm not sure on sizing and a lot less sure how we would run the exhaust hose(s). I suppose we could point one at the existing room exhaust duct (air return), but substantially modifying the duct is probably a no-no. I've also considered installing a fan box in the door of the room, but I'm concerned that this will only drop the temperature a little. Even right now, with all the equipment on, the room is at 83 degrees with the door open. And the main building A/C turns off daily at 6 PM to conserve energy, so the adjacent room temperature rises at night.

    How would you cool this room? Let's say the goal is to bring the temperature with everything running from a steady state of around 90 degrees down to 75 (equivalently, to offset the heat produced by ten 1U servers).
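    On the sizing question, a rough back-of-the-envelope figure (assuming something like 300 W of draw per 1U server, which is only an assumption - measuring the real load at the power strip is better):

        10 servers x ~300 W              ≈ 3,000 W of heat to remove
        3,000 W x 3.412 BTU/hr per watt  ≈ 10,200 BTU/hr
        → roughly a 10,000-12,000 BTU/hr (about one ton) portable unit,
          before allowing headroom for the switches, routers and modems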

    Read the article

  • squid will not listen on specific IP

    - by Chris
    I have a public subnet with squid installed. Let's say my subnet runs from 1.2.3.2 to 1.2.4.254. When I set the following, it works:

        http_port 50000
        http_port 1.2.3.2:50000

    But when I set:

        http_port 1.2.4.37:50000

    it is not working. It only works with the first public IP from my subnet. The error I get when it is not working is the following:

        commBind: Cannot bind socket FD 10 to 1.2.4.37:50000: (99) Cannot assign requested address

    What is wrong with my configuration?

    @Zoredache: I have a dedicated server and the whole subnet is defined as an IP range. When I use only the port or the first IP and do netstat -ntlp | grep 46555, I get the following:

        tcp   0   0   0.0.0.0:50000   0.0.0.0:*   LISTEN   25532/(squid )

    When I use my specific IP, the netstat query shows nothing, as the server does not start. I don't want squid to listen on all IPs or on the first IP; it is just a specific one it should listen to.
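    For what it's worth, that particular bind error usually means the address is not actually configured on any interface of the machine, regardless of what the provider has routed to it. A quick check - a sketch assuming a Linux host with iproute2, with eth0 as a placeholder interface name:

        ip addr show | grep 1.2.4.37        # is the address configured at all?
        ip addr add 1.2.4.37/32 dev eth0    # if not, add it (or configure it permanently
                                            # in the distribution's network config)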

    Read the article

  • Are SATA II and SATA 3.0 Gbps compatible?

    - by Johnny Maelstrom
    I am trying to check that if I buy a new internal HDD it will work in the NAS I am buying. Currently I'm confused about naming schemes, and once that is resolved, whether there is compatibility. I will gladly rewrite this question to be more general if there is not already an article helping with the confusion of SATA naming and standards. I see similar, but not identical, questions and will accept this as a duplicate if it is thought to be one.

    The specifications on the eCommerce site for the NAS say "Controller Interface Type: Serial ATA-150"; the product home page for the manufacturer says "Compatible with SATA and SATA II HDD". The specifications on the eCommerce site for the hard drives say "Interface Type: Serial ATA-300"; the product home page for the manufacturer says "Interface: SATA 3.0 Gbps".

    Wikipedia says many things about different naming conventions, the closest being: "SATA II 3.0 Gbit/s, which was colloquially referred to as 'SATA 3G' [bps] or 'SATA 300' [MB/s], since 1.5 Gbit/s SATA I and 1.5 Gbit/s SATA II were referred to as both 'SATA 1.5G' [b/s] or 'SATA 150' [MB/s]. Therefore, they will operate with negligible differences between them."

    Are SATA II and SATA 3.0 Gbps the same? I feel I'm tantalisingly close to getting a definitive answer here before I purchase, but really want to clear up these naming schemes.

    Read the article

  • Are there any wireless webcams/cameras that Windows will recognize as a capture device?

    - by Keithius
    I'd like to have a webcam in a different room from my computer, and the distance means USB is out of the question. I know there are many wireless cameras, but what I can't seem to find out is if any of them would be recognized by Windows as a capture device (just like a locally connected USB webcam). Most of the wireless cameras I can find (e.g., D-Link DCS920; Cisco-Linksys WVC54GCA, etc.) can all stream video directly from the camera itself, which is fine if you're using the camera as a "security" camera (for private use only), but not for other uses (say, sending the video to an online video streaming service, e.g., Ustream). It seems like this should be possible; after all, wireless (WiFi) printers with scanners are recognized by Windows. Are there any wireless (WiFi) cameras out there that would be recognized by Windows as a capture device in the same way as a USB webcam would? Alternatively, a camera that's not wireless (e.g., connects via Ethernet) would do the trick too - but I imagine if anyone is going to make a remote camera like this, they'd go the extra step and make it wireless, too.

    Read the article

  • Importing a csv list of contacts into Exchange 2007 GAL and creating a Distribution Group

    - by Ken Ray
    Here's the situation: we have a list of about 1,000 contacts (lawyers in the area our court serves) with name and email address. I've been asked to create an email distribution list that can be used to send emails to all of the external users on that list. I've seen various articles using the Exchange Management Shell and the Import-Csv command piped through a ForEach-Object to a New-MailContact to set up the contacts. However, Exchange Management Shell is rather unhelpful, and it isn't working. What I believe I need to do is:

    1) Set up a new distribution group using the Exchange Management Console. Let's say this new distribution group (which appears in the list of Distribution Groups under Recipient Configuration) is called "FloridaBar".
    2) Make sure I have a csv file of the information I want to import.
    3) Open Exchange Management Shell and enter the following command:

        Import-Csv C:\filename.csv | ForEach-Object { New-MailContact -Name $_."NameColumnName" -ExternalEmailAddress $_."EmailAddressColumn" -org FloridaBar }

    Now, creating 1,000+ contacts in Active Directory - I assume that shouldn't be an issue. Do I have the "-org" parm wrong? Do I need to spell out the complete organizational unit name (my.domain.name/Users/FloridaBar)? Is there a better way of doing this? Thanks in advance. Ken
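    For what it's worth, a sketch of how the whole pipeline might be wired together in the Exchange 2007 Management Shell - the column names "NameColumnName" and "EmailAddressColumn" are placeholders for whatever the CSV header actually contains, and the distribution group is assumed to already exist (created in the console as in step 1):

        Import-Csv C:\filename.csv | ForEach-Object {
            # create the external contact in the chosen OU
            $contact = New-MailContact -Name $_."NameColumnName" `
                -ExternalEmailAddress $_."EmailAddressColumn" `
                -OrganizationalUnit "my.domain.name/Users/FloridaBar"
            # then add it to the distribution group
            Add-DistributionGroupMember -Identity "FloridaBar" -Member $contact.Identity
        }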

    Read the article

  • Which internet scenario would be better?

    - by JL
    I currently have an 8 Mbps (down) / 512 kbps (up) telephone ADSL solution. I must say the reliability is excellent, and up until now it's been the fastest connection I could get because I don't live in a cable zone. The real speed of my connection is around 7 Mbps, but sometimes I manage to get the full 8 Mbps. I use my connection for work, so it needs to be at least 99% reliable.

    Recently I was told by a guy who lives up the road that he has a wireless connection with an external antenna, and his speeds are 20 Mbps down / 512 kbps up - he's also paying about half of what I pay for my wired telephone connection.

    My question is: is wireless internet good enough for a power user who uses his connection for work 8 hours a day, including VPNing into servers remotely? Besides this I also enjoy playing the odd network game - not a WoW freak, but sometimes I do pick up the odd MMORPG and at times do indulge in some semi-heavy gaming sprees. Will the wireless latency drive me crazy and seem slow in comparison? Will it be reliable enough? I also live in an area that snows heavily in winter.

    I guess it's a question of: should I go wireless or not. I've only had one wireless connection before, and that was years ago using iBurst technology; I remember it was terrible for VPN, but I guess the technology might have improved since then? What do you guys think?

    Read the article

  • Hyper-V Virtual Machine Networking issues related to Max Ethernet Frame Size

    - by Goatmale
    I fixed an issue earlier today, but I'm interested in learning WHY the fix worked. We set up a new Hyper-V virtual machine only to discover that HTTP traffic wasn't working. HTTPS, pings, everything else was working fine. After months of prodding around I took a shot in the dark: on the Hyper-V host server, the physical NIC had an advanced setting of "Max Ethernet Frame Size" set to 1500. After setting this to 1514 the issue was fixed. Setting it to 1512, on the other hand, did not solve the issue; 1514 is the magic number.

    My best guess is that when this setting was at 1500 it was still allowing incoming pings because their data payload is a lot smaller than that of, say, HTTP traffic. As for HTTPS, I read about something called "Path MTU Discovery", which I'm going to assume is why HTTPS traffic was getting through fine, albeit slower. Looking at this post, people agree that 1518 is the max total frame size. Why didn't I need to change this to 1518 instead of 1514 bytes? And why is the default frame size 1500 if that's the max size of the Ethernet payload and not the max size of the frame?
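    For reference, this is how an untagged Ethernet II frame adds up, which is presumably what that NIC setting is counting:

        1500 bytes  maximum payload (the IP MTU)
        +  14 bytes Ethernet header (6 destination MAC + 6 source MAC + 2 EtherType)
        = 1514 bytes, the largest frame the driver normally has to handle
        +   4 bytes frame check sequence, added and stripped by the hardware itself
        = 1518 bytes on the wire (1522 with an 802.1Q VLAN tag)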

    Read the article

  • If I scp a file through an intermediate server, is the file stored temporarily on the server?

    - by Blacklight Shining
    For the sake of simplicity (I find it easier to remember names than arbitrary letters), I will dispense with letters and use names to refer to the machines in this scenario. Say I have two machines, applejack and pinkie-pie, each on their own separate LAN and not in the same physical location. I also have a server, cadance, with a direct Internet-facing connection.

    I want to copy a file from applejack to pinkie-pie, so to avoid dealing with port forwarding and such, I set up an ssh tunnel from pinkie-pie to cadance (ssh -R etc cadance). Now I can connect to pinkie-pie from anywhere, by connecting to cadance and specifying an alternate port to use. I can also easily copy files to pinkie-pie with scp -P $that_port $some_file cadance:$some_path.

    My understanding of how it works is this:

    1. A secure connection is made from applejack to cadance.
    2. I am authenticated to cadance.
    3. A secure connection is made from applejack to pinkie-pie that spans the existing reverse tunnel and the new connection from step 1.
    4. I am authenticated to pinkie-pie.
    5. Files are copied directly from applejack to pinkie-pie over this connection.

    Am I correct here? How secure is this approach? If I'm wrong… are files copied this way decrypted at cadance before being passed on to pinkie-pie? Is there a possibility that traces of unencrypted data could remain on cadance?
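    To make the described tunnel concrete, a minimal sketch with 2222 as an arbitrary example port (user names and paths are placeholders):

        # on pinkie-pie: keep a reverse tunnel open to cadance
        ssh -N -R 2222:localhost:22 user@cadance

        # on applejack: copy a file to pinkie-pie through that tunnel
        scp -P 2222 some_file user@cadance:/some/path

    Note that sshd binds the forwarded port to cadance's loopback by default, so the scp either has to be run from cadance itself or GatewayPorts has to be enabled in cadance's sshd_config for the port to be reachable from applejack.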

    Read the article

  • How to connect a VM running on an ESXi host to that host via a VMKernel NIC?

    - by Zac B
    Say I have an ESXi (5.0) host that runs a Linux guest which hosts iSCSI targets, and those targets contain the images for other VMs which the same host will run. When it's used, I'll start the host first, then the iSCSI server, and then refresh all storage targets/HBAs in order to see the provided shares as online. I know it's a strange puzzle-box solution, but I was told to implement it.

    The ESXi host itself has a gigabit NIC which connects to the outside world. The guest OS (CentOS) supports VMXNET3, however, and if I can, I'd like to use its VMXNET3 NIC to serve iSCSI to the ESXi host. How should I go about doing this? I went to create a new virtual network and selected "VMkernel", as it suggested that I use that type of network for SAN traffic, but it is apparently not set up for "self-hosted" SAN hosts, as the new network did not appear as an option to attach the CentOS box's VMXNET3 NIC to.

    How should I best connect an iSCSI host out to its "parent" ESXi host, if I need (a) a 10 Gb connection, and (optionally) (b) a VMkernel network for it?

    Read the article

  • NFS headaches with FreeBSD 4.9

    - by Ernie
    Once upon a time, this used to work, and I kept the configuration the same, but... now nothing. I'm just trying to get an NFS server set up on a FreeBSD 4.9 server. The process should be about as complicated as this:

    Add this entry to /etc/exports:

        /var/home /var/vpopmail/domains -maproot=root XXX.XX.XX.XXX

    Execute this:

        portmap
        nfsd -u -t -n 4
        mountd -r

    Then this should work, regardless of network and firewall issues:

        showmount -e localhost

    But showmount -e localhost fails with the following error:

        RPC: Port mapper failure
        showmount: can't do exports rpc

    And even if I kill off the NFS daemon, and try a rpcinfo -p localhost, I get this error:

        rpcinfo: can't contact portmapper: rpcinfo: RPC: Unable to receive; errno = Connection reset by peer

    The portmapper is still running. Why the heck does nothing work as if it isn't?

    Edit to add: FYI, sockstat gives me this:

        $ sockstat |egrep "(nfsd|portmap)"
        root   nfsd      86310   3   udp4   *:2049   *:*
        root   nfsd      86310   4   udp4   *:973    *:*
        root   portmap   45920   0   tcp4   *:111    *:*

    Then, at a later time (say, 5 minutes) it's as if nfsd isn't acting as a server:

        $ sockstat |egrep "(nfsd|portmap)"
        root   portmap   45920   0   tcp4   *:111    *:*

    But the nfs daemon is still running:

        $ ps ax |grep nfsd
        86311  ??  I      0:00.00 nfsd: server (nfsd)
        86312  ??  I      0:00.00 nfsd: server (nfsd)
        86313  ??  I      0:00.00 nfsd: server (nfsd)
        86314  ??  I      0:00.00 nfsd: server (nfsd)

    Read the article

  • Need help setting up OpenLDAP on OSX Mountain Lion

    - by rjcarr
    I'm trying to get OpenLDAP manually configured on OS X Mountain Lion. I'd prefer to do it manually instead of installing OS X Server, but if that's the only option (i.e., OpenLDAP on OS X isn't meant to be used without Server) then I'll just install it.

    I've seen guides that mostly just say to change the password in slapd.conf, start the server, and it should work. However, whenever I try to do anything with the client it tells me this:

        ldap_bind: Invalid credentials (49)

    I've tried encrypting the password as well as leaving it plain; it doesn't seem to matter. The version is 2.4.28, and I've read that as of 2.4 OpenLDAP uses slapd.d directories, but that doesn't seem to be the case on OS X. There was mention of an 'olcRootPW' I should use (instead of 'rootpw' in slapd.conf), but I only found that in a file named slapd.ldif. Anyway, I tried setting a password in there but it didn't make a difference.

    So ... I'm really confused. Has anyone got OpenLDAP working on OS X Mountain Lion without the server tools?
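    One sanity check worth trying - a sketch, assuming the stock OpenLDAP tools and the slapd.conf-style setup described above; the rootdn shown is a placeholder and must match the rootdn/suffix actually set in slapd.conf:

        # generate a hashed password and paste the {SSHA}... output into rootpw
        slappasswd -s 'secret'

        # after restarting slapd, test the bind explicitly as the rootdn
        ldapwhoami -x -H ldap://127.0.0.1 -D "cn=Manager,dc=example,dc=com" -W

    Error 49 just means the DN/password pair was rejected, so binding explicitly as the rootdn narrows down whether the problem is the stored rootpw or the DN the client is using.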

    Read the article

  • "Windows cannot find" file when opening Excel spreadsheet

    - by DanH
    For all of my Excel spreadsheets, when I attempt to open them (by double-clicking in Explorer) I get the message "Windows cannot find C:...". The files are there, and are valid zip files as seen by 7-Zip. There are no apparent lock files in the directories. I did just install Norton 360 over the weekend (replacing Kaspersky), but the Norton log shows no events related to Excel. However, while installing Norton I did reboot with some Excel files open. Presumably something is hosed in my Excel configuration, but I don't know what.

    Update (before actually posting) - I found an article that suggested turning off the advanced option "Ignore other applications that use DDE", then doing excel.exe /unregister followed by excel.exe /register. I tried this, but I suspect that the two Excel calls were ignored (Excel opened, but no obvious change). With that option off the spreadsheets load OK, but not with it on. And, curiously, spreadsheets load OK with the option on or off if I open Excel first and then open the spreadsheet in it. Does anyone have any idea what effect leaving that option off will have?

    Update 2 - I tried running the "repair" option. It said it corrected a couple of config things (without saying what they were), but I still get a failure if I double-click an Excel file with the "Ignore other applications..." option checked.

    Update 3 - I managed to fix this problem, but failed at the time to come back and say what I did, and now I can't remember for sure. But I think it had something to do with "Options"/"Save" and some of the values there. Something to do with AutoRecover, perhaps. (Possibly there was a file in recovery and I had to specify "Disable AutoRecover for this workbook" to let bring-up get past it. Or perhaps the AutoRecover file location was hosed.) Anyway, if it happens to someone else and you find the fix, post it below and I'll mark it answered.

    Read the article

  • Symantec Endpoint Protection Virus Definitions

    - by Gus Denton
    I have done some Googling but I cannot get a definitive answer, certainly not from the Symantec KB. I have a virtualised Win 2003 R2 server, 32-bit. It has been provisioned to me with the Symantec Endpoint Protection 11.0.62xxx CLIENT (not a definitions server). The directory C:\Program Files\Common Files\Symantec Shared\VirusDefs is 750 MB. It doesn't contain .tmp directories, so it is NOT a corrupt definitions server. It does contain directories named with a date pattern, YYYYMMDD.xxx. Some of these folders are 12 months old and I would like to recover the space.

    The Symantec forums are full of this stuff, but a lot of the postings contain links back to documents that are not specific to the Endpoint Protection client. It appears that I should be able to delete the older folders and all will be OK with a service restart; however, there is a warning about having Live Update Administrator installed. Firstly, I have no idea if I have this installed - how do I check? And secondly, can I just ditch these old files and restart?

    Regards, Gus Denton, Learning and Teaching, Uni of New South Wales, Sydney, Australia

    For those trying to assist me, thank you. I have followed some instructions found on the Symantec site and assumed that the response from Nixphoe would resolve my issue. It appears that, as I am on a provisioned VM from a central IT unit, I cannot run the Symantec commands (smc -stop) from the Run prompt, as my admin creds won't get me in. Basically I need to claw back some disk space from the C: drive, which is being filled up with WSUS patches and Symantec files. I have managed to delete one Symantec cache through the Live Update control panel and recovered 470 MB. I suppose my last question for those more experienced than myself is: can I simply remove, say, the two oldest virus definition folders without completely foobaring the Endpoint Protection client and the server? Regards, Gus

    Read the article

  • java.lang.OutOfMemoryError: unable to create new native thread

    - by Brad
    I consistently get this exception when trying to run my JUnit tests on my Mac:

        java.lang.OutOfMemoryError: unable to create new native thread
            at java.lang.Thread.start0(Native Method)
            at java.lang.Thread.start(Thread.java:658)
            at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
            at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
            at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:197)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:184)
            at java.security.AccessController.doPrivileged(Native Method)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.doAsyncCall(ApiProxyLocalImpl.java:172)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.makeAsyncCall(ApiProxyLocalImpl.java:138)

    The same set of unit tests pass perfectly fine on Ubuntu and Windows. Some information about my system resources on the Mac:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        file size               (blocks, -f) unlimited
        max locked memory       (kbytes, -l) unlimited
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 1
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 266
        virtual memory          (kbytes, -v) unlimited

        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode)

    The reason I don't think this is an application issue is because the same tests pass in different environments. I have tried setting the heap to 1024m and 512m, and the stack to 64k and 128k (and each of these combinations), with no luck. My open files limit was originally 256 and I have bumped it to 1024. I have been googling around for a bit and all posts say to decrease heap size and increase stack size, but that doesn't seem to help. Anyone have any more ideas?

    EDIT: Here is some environment information on my Ubuntu box:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 20
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 16382
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) unlimited
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
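    One difference that stands out between the two dumps above is "max user processes" (266 on the Mac versus unlimited on Ubuntu). Whether or not that turns out to be the culprit, it is cheap to rule out by raising the soft limits in the shell that launches the tests - a sketch, where the values are arbitrary and the soft limit can only be raised up to the hard limit shown by ulimit -Hu:

        ulimit -u 512      # allow more processes for this shell
        ulimit -n 2048     # and more open file descriptors
        # then re-run the JUnit tests from this same shell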

    Read the article

  • SQL Server transaction log backups

    - by krimerd
    Hi there, I have a question regarding transaction log backups in SQL Server 2008. I currently take full backups once a week (Sunday) and transaction log backups daily. I put the full backup in folder1 on Sunday, and then on Monday I also put the first transaction log backup in the same folder. On Tuesday, before I take the second transaction log backup, I move the first one from folder1 into folder2, and then I take the second transaction log backup and put it in folder1. Same thing on Wednesday, Thursday and so on. Basically, folder1 always holds the latest full backup and the latest transaction log backup, while the older transaction log backups are in folder2.

    My question is: when SQL Server is about to take, let's say, the fourth (Thursday) transaction log backup, does it look for the previous transaction log backups (first, second and third) so that this new backup will only include the transactions since the last backup, or does it have some other way of knowing whether there are other transaction log backups? Basically, I am asking this because all my transaction log backups seem to be about the same size, and I thought their size would depend on the amount of transactions since the last transaction log backup. Can anyone please explain if my assumptions are right? Thanks...
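    For what it's worth, SQL Server does not look at the backup files sitting in your folders at all: each log backup simply continues from the log sequence number where the previous one ended, and that bookkeeping lives in the database itself and in msdb's backup history. A quick way to see the chain and the real sizes - a sketch, where 'MyDatabase' is a placeholder:

        SELECT backup_start_date,
               type,                         -- 'D' = full, 'L' = log
               first_lsn,
               last_lsn,
               backup_size / 1048576.0 AS size_mb
        FROM   msdb.dbo.backupset
        WHERE  database_name = 'MyDatabase'
        ORDER BY backup_start_date;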

    Read the article

  • IIS7 large worker process request queue creating request blocking; aspnet.config & machine.config amended (bottleneck)

    - by scott_lotus
    ASP.NET 2.0 app, .NET 2.0 framework, IIS7.

    I am seeing a large queue of requests appear under the "worker process" option. The states recorded appear to be Authenticate Request and Execute Request Handler more than anything else.

    I have amended aspnet.config in C:\Windows\Microsoft.NET\Framework64\v2.0.50727 (32 bit path and 64 bit path) to include:

        maxConcurrentRequestsPerCPU="50000"
        maxConcurrentThreadsPerCPU="0"
        requestQueueLimit="50000"

    I have amended machine.config in C:\Windows\Microsoft.NET\Framework64\v2.0.50727\CONFIG (32 bit and 64 bit path) to include:

        autoConfig="true"
        maxIoThreads="100"
        maxWorkerThreads="100"
        minIoThreads="50"
        minWorkerThreads="50"
        minFreeThreads="176"
        minLocalRequestFreeThreads="152"

    Still I get the issue. The issue manifests itself as a large number of requests queued in the worker process. The number of current connections to the website shows 500 when this issue occurs; I don't think I have seen concurrent connections over 500 without this issue occurring. The web application slows as the requests block. Refreshing the app pool resolves it for a while (as expected), as the load is spread between the two pools. The application pool in question has its fixed-request recycle limit set to 50000. Thank you for any help. Scott

    Quick edit to say: hmm, my developers are telling me the project was built with the .NET 3.5 framework. Looking at C:\Windows\Microsoft.NET\Framework64\v3.5 there does not appear to be an ASPNET.CONFIG or a MACHINE.CONFIG... is there a 3.5 equivalent? After a little searching: apparently 3.5 uses the 2.0 framework files that 3.5 is missing. So back to the original question: where is my bottleneck?

    Read the article

  • Web Server Routing Based On Location

    - by Eric
    I have a website that has users from both Hong Kong and Australia. Unfortunately, since the server is located in Australia, users from Hong Kong suffer latency problems: traffic has to go through the US before travelling back to Australia. So I've set up a server in Hong Kong as well, and users using the .hk TLD are redirected to the Hong Kong web server. It shares the same database server with the Australian server, but due to aggressive SQL query caching, the performance impact of the SQL latency is negligible.

    But users accustomed to the Hong Kong website who have since travelled to Australia suffer additional latency, because they go to the .hk site, which redirects to the HK server even when they're in Australia. The website is targeted at international students from Hong Kong, so this is a significant issue for me.

    Instead of redirecting users to the closest web server based on the TLD, how do I redirect users based on their location? Currently I am using nginx, Postgres and Django. Say I know how to estimate users' locations based on their IP addresses - what is my next step? At what level would I work on this? What topic should I read up on?
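    Since nginx is already in front, one common next step is to do the redirect in nginx itself using an IP geolocation database. A sketch, assuming nginx is built with the GeoIP module; the database path and hk.example.com are placeholders:

        # in the http block
        geoip_country /usr/share/GeoIP/GeoIP.dat;

        # in the Australian site's server block
        if ($geoip_country_code = HK) {
            return 302 http://hk.example.com$request_uri;
        }

    The same idea can also live at the application layer (e.g. Django middleware doing the lookup and redirect) or in DNS (a GeoDNS service answering with the nearest server), which trades simplicity for finer control.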

    Read the article
