Search Results

Search found 2736 results on 110 pages for 'generating'.


  • Windows 7: Image thumbnails fail to appear

    - by Fopedush
    All right, I've got a pretty strange one here. Since I installed Windows 7 on this machine some time ago, image thumbnails have never worked properly. For the vast majority of images, they completely fail to show up, showing the icon of the default image viewing application instead. Please note that the “Always show icons, never thumbnails” option in folder options is not checked. I've taken a screenshot demonstrating the problem, here: Sometimes, a few image thumbnails will show up correctly, maybe about one in ten, with the rest failing as well. Another screenshot, with a handful of thumbnails visible, can be seen here: Windows Explorer does not appear to make any effort to populate the missing thumbnails, and there is no appreciable CPU usage. I can leave a window with missing thumbnails open all night and they will still never appear. Newly created images never generate thumbnails; only images that have been on the system since day one will occasionally show them. This leads me to believe that Explorer is set to show thumbnails, but whatever process is supposed to be in charge of actually generating them has failed somehow. In previous versions of Windows, explorer.exe itself was responsible for thumbnail generation – has this changed? Any suggestions at all – even if you aren't sure that they will work – are welcome. I'd hate to have to wipe and reinstall on this machine for such a minor annoyance.

    Read the article

  • Request bursting from web application Load Tests

    - by MaseBase
    I'm migrating our web and database hosting to a new environment on all new machines. I've recently performed a Load Test using WAPT to generate load from multiple distributed clients. The server has plenty of room to handle the traffic load, but I'm seeing an odd pattern of incoming traffic during the load tests. Here is the gist of our setup: a firewall server running MS Forefront TMG 2010 on Win 2k8 Server; request routing done by IIS Application Request Routing on the firewall machine; the web server is a Hyper-V VM on the database server (which is the host OS); these machines are hefty, with dual CPUs of six cores each (12 procs total); the web server runs IIS 7.5; the web applications are built in ASP.NET 2.0, with 1 ISAPI filter (Url Rewrite) in front. What I'm seeing during the load tests is that the requests all come through in bursts. Even though I have 7 different distributed clients sending traffic loads, the requests come through about 300-500 requests at a time. The performance monitor shows nearly all of the counters moving through this pattern: a burst of requests comes in, the req/sec jumps to 70, the queued requests jump to 500, the current requests jump up, the CPU jumps up, everything. Then once it's handled that group of requests, it has a lull for nearly 10 seconds where nearly nothing is happening: 0-5 req/sec, 0 queued requests, minimal CPU usage. Then after 10 seconds of inactivity, another burst comes through, spiking all of the counters once again. What I can't figure out is why the requests are coming through in bursts when I know that the load being generated is not sent that way, especially considering the various load-generating clients send traffic at different intervals with random think times between each request. Is there something in the layers between Hyper-V or perhaps in the hardware which might cause this coalescing of requests? Here is what I'm looking at: the highlighted metric is Requests/sec, but the other critical counters go with it, such as Requests Queued (which I'd obviously like to keep as close to 0 as possible). Any ideas on this?

    Read the article

  • ssh timeout issue connecting to an EC2 instance on OS X

    - by mamusr
    I am new to AWS and not a networking expert, but curious to know more about it. I created a VPC with a public subnet only. Then I created an EC2 instance using an Ubuntu 14.04 64-bit PV AMI image (ami-e84d8480), as well as generating the key pair needed to connect to it through ssh. I followed Amazon's instructions to connect to an EC2 instance via ssh, which did not work. Here is my attempted input and debug log: Running on OS X 10.9.4 user$ ssh -vvv -i key.pem [email protected] OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011 debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug1: /etc/ssh_config line 102: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to xxx.xxx.xxx.xxx [xxx.xxx.xxx.xxx] port 22. debug1: connect to address xxx.xxx.xxx.xxx port 22: Operation timed out ssh: connect to host xxx.xxx.xxx.xxx port 22: Operation timed out To attempt to resolve the issue: I enabled the SSH port. I tried different usernames other than ubuntu, like ec2-user and root. Initially I set an inbound SSH rule in the security group to allow connections from only my IP address. When that did not work, I changed it to allow any IP to connect. But those actions did not fix the problem. Here are my guesses as to what I am missing in getting the EC2 instance connection to work. My /etc/ssh_config file may be preventing the connection from taking place. I may have missed an important networking detail when setting up the VPC. I do not have a public IP address specified for the instance; I am connecting through the private IP address. My questions for the community: am I going about it the wrong way by connecting to the instance through the private IP address? If so, do I need to specify a public IP address for it to connect, or some other method?
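
    A quick sketch of the pieces a VPC instance usually needs before SSH from the internet can work, using the AWS CLI with placeholder IDs (the exact IDs, and whether you attach an Elastic IP or a plain public IP, are assumptions to adapt; the private address is not reachable from outside the VPC):

      # Hypothetical IDs (vpc-..., igw-..., rtb-..., i-...): replace with your own.
      aws ec2 create-internet-gateway                                    # the VPC needs an Internet Gateway
      aws ec2 attach-internet-gateway --internet-gateway-id igw-11111111 --vpc-id vpc-22222222
      aws ec2 create-route --route-table-id rtb-33333333 \
          --destination-cidr-block 0.0.0.0/0 --gateway-id igw-11111111   # default route out of the subnet
      aws ec2 allocate-address --domain vpc                              # get an Elastic IP
      aws ec2 associate-address --instance-id i-44444444 --allocation-id eipalloc-55555555
      ssh -i key.pem ubuntu@<elastic-ip>                                 # then connect to the public address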

    Read the article

  • ssh key questions

    - by Tim
    I have some questions regarding generating keys for SSH access: (1) Suppose there are two computers running the SSH server service, and I have generated a pair of key files on computer A and copied the public file to computer B. Is it true that this is only a one-way key: we have only given computer A permission to access computer B, not given computer B permission to access computer A? If I now want to ssh from computer B to computer A, must I generate another pair of key files on computer B and copy the public file to computer A? (2) If I would like to connect a single local computer to several remote servers, is it better to generate a common pair of key files only once on the local computer and copy the same public file to all the remote servers, or to generate a different pair of key files on the local computer for each remote server? (3) If I would like to connect several local computers to a single remote server, when copying the public files from the different local computers to the remote server, should they be combined into a single authorized_keys file or stored in different authorized_keys files? (4) If several servers share the same file system via, for example, NFS, how should I generate keys and arrange the key files for access from one server to another? Also, how should I generate keys and arrange the key files for a local computer to access any one of the servers? All the machines above are Linux. Please provide examples and commands in your reply so that I can better understand how to solve the problems. Thanks and regards!
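
    For reference, a minimal shell sketch of the usual arrangement (one key pair per client machine, public halves appended to authorized_keys on each server it should reach; hostnames and usernames are placeholders):

      ssh-keygen -t rsa                                # run once on each client (e.g. computer B)
      ssh-copy-id -i ~/.ssh/id_rsa.pub user@computerA  # appends the public key to A's ~/.ssh/authorized_keys
      ssh user@computerA                               # B -> A now works; A -> B needs its own key pair the same way
      # Several clients' public keys simply become separate lines in the one
      # ~/.ssh/authorized_keys file on the server they all need to reach.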

    Read the article

  • Understanding Security Certificates (and their pricing)

    - by John Robertson
    I work at a very small company, so certificate costs need to be absolutely minimal. However, for some applications we do need to have our customers get that warm fuzzy not-using-a-self-signed-certificate feeling. Since creating a "certificate authority" with makecert really just means creating a public/private key pair, it seems pretty clear that creating a public/private key pair FROM such a "certificate authority" really just means generating a second public/private key pair and signing both with the private key that belongs to the "certificate authority". Since the keys are signed, anyone can verify they came from the certificate authority I created; or, if VeriSign gave me the pair, they sign it with one of their own private keys, and anyone can use VeriSign's corresponding public key to confirm VeriSign as the source of the keys. Given this, I don't understand why, when I go to VeriSign or GoDaddy, they have rates only for yearly plans, when all I really want from them is a single public/private key pair signed with one of their private keys (so that anyone else can use their public keys to confirm that, yes, they gave me that public/private key pair and they confirmed I was who I said I was, so you can trust my public/private key pair as belonging to a legitimate third party). Clearly I am misunderstanding something; what is it? Does VeriSign retire their public/private key pairs periodically, so that my VeriSign-signed key pair "expires" and I need new ones? Edit: I learned that the certificate has an internal expiration date and that it also maintains an internal value stating whether it can be used to sign other certificates (i.e. sign other private/public key pairs stored as certificates). Can't I get a few (even one) non-signing certificates signed by someone like VeriSign that I can use for authentication/encryption without a yearly subscription?

    Read the article

  • Networking Problem: MrxSmb event 50 "Delayed Write Failed" errors occurring all of a sudden

    - by Johnny Musso
    JUST THIS MONTH, we have started getting reports from a number of very stable clients that MrxSmb event ID 50 errors keep appearing in their system event logs. Otherwise, they do not appear to have any networking problems, except that there is a critical legacy application which seems to either be generating the MrxSmb errors or having errors occur because of them. The legacy application is comprised of 16-bit and 32-bit code and has not been changed or recompiled in many years. It has always been stable on Windows XP systems. The customers that have the problem usually have a small (5 clients or fewer) peer-to-peer network with all Windows XP systems. All service packs are loaded on the XP machines. Note: the only thing that seems to correct the problem is disabling opportunistic locking. I don't like this solution because it seems to slow down the network and sometimes causes record-locking issues between users (on some networks). Also, this seems to have just started happening - as if a Windows update for XP had caused it? However, I have removed recent updates and it did not correct the issue. Thanks in advance for any help you can provide.

    Read the article

  • IIS7 - Web Deployment Tool - SetParam/SetParamFile to set http and https bindings + Cert

    - by Andras Zoltan
    Hi, we're currently using the MS Web Deployment Tool to sync a live website and some WebServices from a staging box to two live servers. The staging box hosts the site on any IP on port 17000, whereas the two live servers are load-balanced and have a different IP for each of them. At present, I generate two separate packages for deployment - one for each machine - using the sync operation and specifying a DestinationBinding parameter as follows: msdeploy -verb:sync -source:WebServer,computerName=localhost -dest:package="machinename.zip" -setParam:type="DestinationBinding",scope="SiteName",value="ip_address:port:". (Split across multiple lines to make it easier to read!) I run this twice, with a different target filename and IP address for each of the two machines. When it comes to deployment, I simply do a sync from each package to its respective live site. I know, I know - I should be able to do it by generating one parameterised package and then perhaps using the SetParamFile switch for each of the two servers - believe me I'd like to, but the documentation on doing this is frankly non-existent. Now I need to configure and deploy both HTTP and HTTPS bindings for this site, including also the SSL cert that is to be used. I've added an SSL binding for the site on the staging box - which uses a development cert (which will need to be replaced - or should the staging box be using the live cert?), and now the above command line has the effect of replacing the target IP on both http and https entries. It appears that I cannot specify multiple bindings plus the cert information in the DestinationBinding value in the -setParam above, so does anyone know how I would go about doing this? Any help greatly appreciated.

    Read the article

  • Skip Corrupt Revisions During SvnAdmin Load

    - by cisellis
    I have a dump file that I am generating from VSS with the use of the VSS2SVN script. I've tested the generated dump file before and some of the revisions are corrupt for one reason or another (binary data or long path strings seem to be the main culprit). This is fine. In the past I have used svndumpfilter to split the dump file, remove the corrupt revisions and continue to load the repository. It worked but took a lot of manual effort to start the load, hit the bad revision, split the dump file, continue loading the repo, etc. This dump file is pretty large (~5GB) and takes several hours to load. I think I know the answer to this but is there any way to simply tell svnadmin load to keep going and skip corrupt revisions? I know how to verify, backup, etc. the dump file and don't need any of that. I don't care about recovering corrupt revisions. I just want to start the load, walk away, and not worry about checking it every few hours to manually remove the corrupt revisions. Is that possible? Thanks.

    Read the article

  • User-friendly program to create editable & searchable pdf files, like tax & application forms and su

    - by Nick Gorbikoff
    Hello. Can somebody recommend a user-friendly program that will create (or convert from Excel & Word, or OpenOffice) editable PDF forms? You know, like the tax forms that some of us have filled out, where you can create a form with a predefined format and stationery but let the user edit/fill out fields. I need something user-friendly that a regular person can use. I'm NOT looking for a PDF library (I already use wkhtmltopdf for generating PDFs programmatically). The reason is that we have about 400 documents (internal expense forms, training forms, etc.) in .doc and .xls format that we want to convert to editable PDF (so that people don't have to fill them out by hand). Coding 400 templates and then converting them using some lib or command-line tool is not my idea of fun, especially since those forms change all the time. I'd like to just give HR and the Quality department the tool, so that they can maintain those documents. I looked at everything listed on this page ( http://www.cogniview.com/convert-pdf-to-excel/post/pdf-editing-creation-50-open-sourcefree-alternatives-to-adobe-acrobat/ ), but can't find what I need. Thank you!!!

    Read the article

  • Azure's Ubuntu 12.04 fails to install PHP5

    - by Alex Kennberg
    Similar to this article from Azure themselves: http://www.windowsazure.com/en-us/manage/linux/common-tasks/install-lamp-stack/ I am trying to install PHP5 on an Ubuntu 12.04 virtual machine. However, it fails when installing ssl-cert. $ sudo apt-get install php5 Reading package lists... Done Building dependency tree Reading state information... Done php5 is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 49 not upgraded. 1 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y Setting up ssl-cert (1.0.28) ... Could not create certificate. Openssl output was: Generating a 2048 bit RSA private key ............................+++ ...................................................................................................................+++ writing new private key to '/etc/ssl/private/ssl-cert-snakeoil.key' ----- problems making Certificate Request 140320238503584:error:0D07A097:asn1 encoding routines:ASN1_mbstring_ncopy:string too long:a_mbstr.c:154:maxsize=64 dpkg: error processing ssl-cert (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: ssl-cert E: Sub-process /usr/bin/dpkg returned an error code (1) Any tips appreciated.
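
    The 'string too long ... maxsize=64' line suggests the snakeoil certificate is being generated with the machine's very long FQDN as its CN (OpenSSL caps the CN at 64 characters), which is common on Azure's cloudapp hostnames. A hedged sketch of a workaround, assuming that is the cause (the short name is a placeholder):

      hostname -f                         # check how long the FQDN really is
      sudo hostname shortname             # temporarily use a short hostname
      sudo make-ssl-cert generate-default-snakeoil --force-overwrite   # regenerate the snakeoil cert
      sudo apt-get -f install             # let dpkg finish configuring ssl-cert and php5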

    Read the article

  • Is there an IE8 setting or policy to make it work like IE7 with respect to persistent connections?

    - by Stephen Pace
    I am working with a commercial application running on XP using IIS 5.1. Periodically the application is returning an IIS error "There are too many people accessing the Web site at this time." This is caused by Microsoft artificially limiting the number of connections (10) under IIS 5.1 under Windows XP, but in this case, there is really only one user (albeit a few tabs open at a time). Microsoft suggests you can reduce the problem by turning off HTTP Keep-Alives for that particular web site: http://support.microsoft.com/kb/262635 If you use IIS 5.0 on Windows 2000 Professional or IIS 5.1 on Microsoft Windows XP Professional, disable HTTP keep-alives in the properties of the Web site. When you do this, a limit of 10 concurrent connections still exists, but IIS does not maintain connections for inactive users. I may do that; however, I'm worried about performance degradation. However, I also notice that IE8 appears to handle this differently than IE7. By default, IE6 and IE7 use 2 persistent connections while IE8 uses 6. Perhaps in this case IE8 itself is generating multiple connections in an attempt to be faster, but those additional connections are overwhelming the artificially limited IIS 5.1 on XP? Assuming that is the case, is there an Internet Explorer option, registry setting, or policy I can set to force IE8 to behave like IE7 with respect to persistent connections? I would not set this for all users, but for the small number of users that used this application, it might solve their intermittent problem until the application can be rehosted on Windows Server 2008. Thanks.

    Read the article

  • DoS/Flood Lag even though Port not Saturated

    - by Asad Moeen
    My GameServers had been under some UDP floods, in response to which they generated output back to the attacker, which gave the GameServers some huge lag. Thanks to friends at ServerFault and different kinds of testing, I was able to successfully block the attack. My question is actually about something else, but it is important to know how the GameServers reacted to the attack and whether the machine stayed stable or not: 300 kbps of input would cause a GameServer to generate 2 Mbps of output. So as the input rate kept increasing, the output rate would get so high that it would no longer be possible for the GameServer to control it, and hence it would give a huge lag until the attack was stopped. Usually the game server starts to lag when it sends out something greater than 5 Mbps; under that it is controllable. Theoretically, I was able to receive 60 Mbps of output from my GameServer on inputting 10 Mbps. It's just the way the GameServer works if not protected. Now on some of my machines, only the GameServer under attack lagged, and although the server was generating 60 Mbps of output, the rest of the GameServers on other ports on the same machine would run fine without lag. But there was another machine, which also runs on a 100 Mbps network port, where even 1 Mbps of input (and ZERO output, because the attack is blocked), even on an unused port, would give a constant yellow line (on the Lag-o-Meter) to all the clients on all GameServers, indicating lag; that line is blue under normal conditions. It would remain the same even on 50 Mbps or 900 Mbps of input. I tried contacting the host about it because I believe it's the way their network is bridged, but they can't help me with it. Does anyone else know about such issues? If 900 Mbps of input does not saturate the port, how can 1 Mbps of input lag the servers although the port is not saturated and enough bandwidth is available?
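
    For reference, a hedged sketch of the kind of per-source UDP rate limit that can be put in front of a game server so it never generates the amplified output in the first place (the port and the rates are placeholders, and --hashlimit-above needs a reasonably recent xt_hashlimit):

      iptables -A INPUT -p udp --dport 27015 \
          -m hashlimit --hashlimit-above 300/sec --hashlimit-burst 500 \
          --hashlimit-mode srcip --hashlimit-name gameflood -j DROP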

    Read the article

  • Cannot find FIS partition 'initramfs'......... need help!!!

    - by vikramtheone
    Hi guys, I have Ubuntu 9.04 Linux running on Freescale's i.MX515 (ARM Cortex-based) board. There were about 250 updates pending and I applied them today; well, some of the updates failed because of the infamous errors: E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem. E: Couldn't rebuild package cache E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem. So, when I do 'sudo dpkg --configure -a' I get new errors related to the FIS partition: Cannot find FIS partition 'initramfs' User postinst hook script [/usr/sbin/flash-kernel] exited with value 1 dpkg: error processing linux-image-2.6.28-18-imx51 (--configure): subprocess post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of linux-image-imx51: linux-image-imx51 depends on linux-image-2.6.28-18-imx51; however: Package linux-image-2.6.28-18-imx51 is not configured yet. dpkg: error processing linux-image-imx51 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-imx51: linux-imx51 depends on linux-image-imx51 (= 2.6.28.18.23); however: Package linux-image-imx51 is not configured yet. dpkg: error processing linux-imx51 (--configure): dependency problems - leaving unconfigured Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-2.6.28-18-imx51 Cannot find FIS partition 'initramfs' dpkg: subprocess post-installation script returned error exit status 1 What's going wrong here? I need help!!! I'm a newbie. Regards, Vikram
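
    One commonly suggested workaround, assuming the failure is only the flash-kernel hook looking for a RedBoot FIS partition the board does not use, and assuming your flash-kernel version honours the FLASH_KERNEL_SKIP environment variable:

      sudo FLASH_KERNEL_SKIP=1 dpkg --configure -a    # flash-kernel exits cleanly when this is set
      sudo FLASH_KERNEL_SKIP=1 apt-get -f install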

    Read the article

  • check_snmp warning & critical thresholds with negative values

    - by Oesor
    I'm querying some signal level values measured in dBm, and the SNMP host on the remote device reports the values as negative values, i.e. -90 dBm. However, check_snmp seems to be incapable of dealing with negative numbers as part of its threshold values. If I specify the values as part of a collection of OIDs, it accepts the syntax but converts the SNMP value to positive, thus always generating a WARNING/CRITICAL result: root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::AverageReceiveSNR.0,DEVICE-MIB::CurrentNoiseFloor.0 -w 10:,~:-85 -c 15:,~:-80 -vvvv /usr/bin/snmpget -t 1 -r 5 -m ALL -v 1 [authpriv] 192.168.1.100:161 DEVICE-MIB::AverageReceiveSNR.0 DEVICE-MIB::CurrentNoiseFloor.0 DEVICE-MIB::AverageReceiveSNR.0 = INTEGER: 25 DEVICE-MIB::CurrentNoiseFloor.0 = INTEGER: -97 Processing line 1 oidname: DEVICE-MIB::AverageReceiveSNR.0 response: = INTEGER: 25 Processing line 2 oidname: DEVICE-MIB::CurrentNoiseFloor.0 response: = INTEGER: -97 SNMP CRITICAL - 25 *97* | DEVICE-MIB::AverageReceiveSNR.0=25 DEVICE-MIB::CurrentNoiseFloor.0=97 If I run it with a single OID, it gives me an error that the format is incorrect: root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::CurrentNoiseFloor.0 -w ~:-85 -c ~:-80 -vvvv Range format incorrect And if I run it with no thresholds defined, it works properly and returns the right value. This makes the graphs correct; however, it'll never generate a notification when out of range: root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::CurrentNoiseFloor.0 -vvvv /usr/bin/snmpget -t 1 -r 5 -m ALL -v 1 [authpriv] 192.168.1.100:161 DEVICE-MIB::CurrentNoiseFloor.0 DEVICE-MIB::CurrentNoiseFloor.0 = INTEGER: -97 Processing line 1 oidname: DEVICE-MIB::CurrentNoiseFloor.0 response: = INTEGER: -97 SNMP OK - -97 | DEVICE-MIB::CurrentNoiseFloor.0=-97 What am I doing wrong here? How would I, for example, generate a CRITICAL when the noise floor is -80 dBm or higher, a WARNING when it's -85 to -80 dBm, and an OK when it's -85 dBm or lower? Do I have to write my own SNMP plugins when dealing with negative values?
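
    A hedged workaround sketch while check_snmp mishandles the sign: query the OID directly and apply the dBm thresholds in a small wrapper plugin (the host, the community string and the OID below are assumptions taken from the output above):

      #!/bin/sh
      # Nagios-style wrapper: CRITICAL at -80 dBm or higher, WARNING from -85 to -80, OK below -85.
      VAL=$(snmpget -v1 -c public -Oqv 192.168.1.100 DEVICE-MIB::CurrentNoiseFloor.0)
      if [ "$VAL" -ge -80 ]; then echo "NOISE CRITICAL - ${VAL} dBm"; exit 2
      elif [ "$VAL" -ge -85 ]; then echo "NOISE WARNING - ${VAL} dBm"; exit 1
      else echo "NOISE OK - ${VAL} dBm"; exit 0
      fi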

    Read the article

  • How to make ssh connection between servers using public-key authentication

    - by Rafael
    I am setting up a continuous integration (CI) server and a test web server. I would like the CI server to access the web server with public-key authentication. On the web server I have created a user and generated the keys sudo useradd -d /var/www/user -m user sudo passwd user sudo su user ssh-keygen -t rsa Generating public/private rsa key pair. Enter file in which to save the key (/var/www/user/.ssh/id_rsa): Created directory '/var/www/user/.ssh'. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /var/www/user/.ssh/id_rsa. Your public key has been saved in /var/www/user/.ssh/id_rsa.pub. However, on the other side, the CI server copies the key to the host but it still asks for a password ssh-copy-id -i ~/.ssh/id_rsa.pub user@webserver-address user@webserver-address's password: Now try logging into the machine, with "ssh 'user@webserver-address'", and check in: .ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting. I checked on the web server and the CI server's public key has been copied to the web server's authorized_keys, but when I connect, it asks for a password. ssh 'user@webserver-address' user@webserver-address's password: If I use the root user rather than the user I created (both users have the public keys copied), it connects with the public key ssh 'root@webserver-address' Welcome to Ubuntu 11.04 (GNU/Linux 2.6.18-274.7.1.el5.028stab095.1 x86_64) * Documentation: https://help.ubuntu.com/ Last login: Wed Apr 11 10:21:13 2012 from ******* root@webserver-address:~#
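
    A common culprit when root works but a newly created user does not is permissions: sshd silently ignores authorized_keys if the home directory, ~/.ssh or the file itself is group- or world-writable, which is plausible here since the home is under /var/www. A sketch of the usual reset on the web server, plus where to look for the reason (the log path assumes an Ubuntu-style syslog layout):

      sudo chown -R user:user /var/www/user
      sudo chmod 755 /var/www/user                       # home must not be group/world writable
      sudo chmod 700 /var/www/user/.ssh
      sudo chmod 600 /var/www/user/.ssh/authorized_keys
      sudo tail -f /var/log/auth.log                     # sshd logs exactly why it fell back to password auth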

    Read the article

  • Ubuntu 64bit Xen DomU Issues after upgrade from Karmic to Lucid

    - by Shoaibi
    I was upgrading my servers today and it all went fine except the last machine which has the following issues: [Resolved using http://www.ndchost.com/wiki/server-administration/upgrade-ubuntu-pre-10.04#post-1004-upgradefinal-steps] No login prompt on console Done. Begin: Mounting root file system... ... Begin: Running /scripts/local-top ... Done. [ 0.545705] blkfront: xvda: barriers enabled [ 0.546949] xvda: xvda1 [ 0.549961] blkfront: xvde: barriers enabled [ 0.550619] xvde: xvde1 xvde2 Begin: Running /scripts/local-premount ... Done. [ 0.870385] kjournald starting. Commit interval 5 seconds [ 0.870449] EXT3-fs: mounted filesystem with ordered data mode. Begin: Running /scripts/local-bottom ... Done. Done. Begin: Running /scripts/init-bottom ... Done. Also tried by pressing ENTER and CTRL+C many times, no use. Resolved: [/tmp was mounted as noexec, changing that fix it]: I get errors when i try to re-install udev in single user mode: Unpacking replacement udev ... Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot Processing triggers for man-db ... Setting up udev (151-12.1) ... udev start/running, process 1003 Removing `local diversion of /sbin/udevadm to /sbin/udevadm.upgrade' update-initramfs: deferring update (trigger activated) Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-2.6.32-25-server /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/fixrtc: Permission denied /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/ntfs_3g: Permission denied /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/resume: Permission denied /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/nfs-top/udev: Permission denied /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/panic/console_setup: Permission denied /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/all_generic_ide: Permission denied /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/blacklist: Permission denied /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/udev: Permission denied /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-bottom/udev: Permission denied /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-bottom/ntfs_3g: Permission denied
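
    Those mkinitramfs 'Permission denied' errors match the noexec /tmp noted in the resolution above; a minimal sketch of that workaround:

      sudo mount -o remount,exec /tmp        # let mkinitramfs execute its hook scripts again
      sudo dpkg --configure -a               # or: sudo apt-get install --reinstall udev
      sudo mount -o remount,noexec /tmp      # put the original mount options back afterwards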

    Read the article

  • ISC DHCPD IPv6 for multiple interfaces

    - by Seoman
    I want to assign multiple IPv6 addresses to a server with multiple NICs. As the IPv6 RFC defines, each server has a unique DUID, which can have one of 3 formats (LL, LLT or enterprise), and each NIC has an IAID. So a request from NIC1 carries the DUID and the IAID of NIC1, and a request from NIC2 carries the same DUID but a different IAID. The problem is that from a CentOS box, when I ask for an IP on 2 different interfaces, I get the same IP. I can't find how to specify a host entry based on the DUID and the IAID. I see some people generating a unique DUID based on the MAC of the NIC, but this is not what the IPv6 RFC says. What I tried is: host entry1 { host-identifier option dhcp6.client-id 00:01:00:01:19:fc:f8:1c:52:54:00:7e:c9:ec; option dhcp6.ia-na "00:09:40:5d"; fixed-address6 2001:db8:0:1::202; } host entry2 { host-identifier option dhcp6.client-id 00:01:00:01:19:fc:f8:1c:52:54:00:7e:c9:ec; option dhcp6.ia-na "00:7e:c9:ec"; fixed-address6 2001:db8:0:1::201; } This causes a segmentation fault in the client (which is scary...). I guess this is not the right use of the ia-na option, but I don't see any other option.

    Read the article

  • How can I identify which process is making UDP traffic on Linux?

    - by boos
    My machine is continuously making UDP DNS requests. What I need to know is the PID of the process generating this traffic. The normal way with a TCP connection is to use netstat/lsof and get the process associated with the PID. With UDP the connection is stateless, so when I call netstat/lsof I can see the socket only if it is open and sending traffic at that moment. I have tried with lsof -i UDP and with netstat -anpue, but I can't find which process is making those requests, because I would need to call lsof/netstat at exactly the moment the UDP traffic is sent; if I call lsof/netstat before or after the UDP datagram is sent, it is impossible to see the open UDP socket. Calling netstat/lsof at exactly the moment 3-4 UDP packets are sent is IMPOSSIBLE. How can I identify the infamous process? I have already inspected the traffic to try to identify the sending PID from the content of the packets, but it is not possible to identify it from the content of the traffic. Can anyone help me? I'm root on this machine. FEDORA 12 Linux noise.company.lan 2.6.32.16-141.fc12.x86_64 #1 SMP Wed Jul 7 04:49:59 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux
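
    One approach that avoids the timing race entirely is to let the audit subsystem record who makes the outgoing calls; a hedged sketch (run as root, syscall names assume x86_64, and the key name is a placeholder):

      auditctl -a exit,always -F arch=b64 -S sendto -S sendmsg -k udp-dns   # log every UDP-style send syscall
      sleep 60                                                              # let a few of the mystery queries go out
      ausearch -k udp-dns | grep -oE '(pid|comm|exe)=[^ ]+' | sort | uniq -c | sort -rn | head
      auditctl -d exit,always -F arch=b64 -S sendto -S sendmsg -k udp-dns   # remove the rule again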

    Read the article

  • Thumbnail generation with imagemagick doesn't render the correct colors

    - by Bastien
    Generating thumbnails of PDFs with imagemagick sometimes renders incorrect colors. We're using an old version of imagemagick (6.5.7-8, that's the version installed on the heroku servers). Here is the command we're currently using: convert -size "725x1200>" -colorspace RGB -flatten -density 300 -quality 100 input.pdf output.jpg I've tried using different colorspaces like sRGB,YIQ,.. but none of them are rendering the color correctly. Using imagemagick-6.7.7-6 locally works so I've tried to bundle the 'convert' command within my application /bin directory, the command works but the result is still wrong, so it seems that the problem comes either from another imagemagick command used by 'convert' or from another library. Here's an example of the outputs: Correct output: http://i.stack.imgur.com/gf9eG.jpg Wrong output: http://i.stack.imgur.com/imUeD.jpg Strangely, with some pages of the same pdf the output is always correct. Any idea which library or command could be the issue, or if there is a proper set of options to pass to imagemagick to always get it right? Thanks in advance for your help.
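
    A hedged variation worth trying, since argument order and colorspace are the usual suspects when older builds render CMYK PDFs oddly: put -density before the input so Ghostscript rasterizes at that resolution instead of the page being upscaled afterwards, request sRGB rather than linear RGB, and use -resize (rather than -size, which is only a reading hint) for the output geometry:

      convert -density 300 -colorspace sRGB input.pdf \
          -resize "725x1200>" -flatten -quality 100 output.jpg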

    Read the article

  • Apache22 on FreeBSD - Starts, does not respond to requests

    - by NuclearDog
    Hey folks! I'm running Apache 2.2.17 with the peruser MPM on FreeBSD 8.2-RC1 on Amazon's EC2 (so it's XEN). It was installed from ports. My problem is that, although Apache is running, listening for, and accepting connections, it doesn't actually respond to any or show them in the log at all. If I telnet to the port it's listening on and type out an HTTP request: GET / HTTP/1.1 Host: asdfasdf And hit enter a couple of times, it just sits there... Nothing. No response requesting with a browser either. There doesn't appear to be anything helpful in the error log: [Sun Jan 09 16:56:24 2011] [warn] Init: Session Cache is not configured [hint: SSLSessionCache] [Sun Jan 09 16:56:25 2011] [notice] Digest: generating secret for digest authentication ... [Sun Jan 09 16:56:25 2011] [notice] Digest: done [Sun Jan 09 16:56:25 2011] [notice] Apache/2.2.17 (FreeBSD) mod_ssl/2.2.17 The access log stays empty: root:/var/log# wc httpd-access.log 0 0 0 httpd-access.log root:/var/log# I've tried with accf_http and accf_data both enabled and disabled, and with both the stock configuration and my customized config. I also tried uninstalling apache22-peruser-mpm and just installing straight apache22... Still no luck. I tried removing all of the LoadModule lines from httpd.conf and just re-enabled the ones that were necessary to parse the config. Ended up with only the following loaded: root:/usr/local/etc/apache22# /usr/local/sbin/apachectl -M Loaded Modules: core_module (static) mpm_peruser_module (static) http_module (static) so_module (static) authz_host_module (shared) log_config_module (shared) alias_module (shared) Syntax OK root:/usr/local/etc/apache22# Same results. Apache is definitely what's listening on port 80: root:/usr/local/etc/apache22# sockstat -4 | grep httpd root httpd 43789 3 tcp4 6 *:80 *:* root httpd 43789 4 tcp4 *:* *:* root:/usr/local/etc/apache22# And I know it's not a firewall issue as there is nothing running locally, and connecting from the local box to 127.0.0.1:80 results in the same issue. Does anyone have any idea what's going on? Why it would be doing this? I've exhausted all of my debugging expertise. :/ Thanks for any suggestions!

    Read the article

  • Clarification on signals (sighup), jobs, and the controlling terminal

    - by asolberg
    So I've read two different perspectives and I'm trying to figure out which one is right. 1) Some sources online say that signals sent from the controlling terminal are ONLY sent to the foreground process group. That means if you want a process to continue running in the background when you log out, it is sufficient to simply suspend the job (ctrl-Z) and resume it in the background (bg). Then you can log out and it will continue to run because SIGHUP is only sent to the foreground job. See: http://blog.nelhage.com/2010/01/a-brief-introduction-to-termios-signaling-and-job-control/ ...In addition, if any signal-generating character is read by a terminal, it generates the appropriate signal to the foreground process group.... 2) Other sources claim you need to use the "nohup" command at the time the program is executed, or failing that, issue a "disown" command during execution to remove it from the jobs table that listens for SIGHUP. They say that if you don't do this, your process will also exit when you log out, even if it's running in a background process group. For example: http://docstore.mik.ua/orelly/unix3/upt/ch23_11.htm ...If I log out anyway, the shell sends my background job a HUP signal... In my own experiments with Ubuntu Linux it seems like 1) is correct. I executed a command, "sleep 20 &", then logged out, logged back in and did a "ps aux". Sure enough, the sleep command was still running. So then why is it that so many people seem to believe number 2? And if all you have to do is place a job in the background to keep it running, why do so many people use "nohup" and "disown"?
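
    Both descriptions can be right: bash only HUPs its background jobs at logout when the huponexit shell option is on (and many distributions ship it off), which would explain the experiment above, while the terminal driver itself only signals the foreground process group. A small sketch of the usual ways to check and to detach a job either way (long_job.sh is a placeholder):

      shopt -p huponexit            # check whether this login shell will HUP jobs on exit
      nohup long_job.sh &           # immune to SIGHUP from the start
      long_job.sh & disown -h %1    # started normally, then marked so the shell won't HUP it
      setsid long_job.sh            # new session: no controlling terminal at all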

    Read the article

  • Does MySQL have some kind of DoS protection or per-user query limit?

    - by Ghostrider
    I'm a bit at a loss. I'm running a MySQL database that's roughly 1 GB of data and indices combined on a dedicated Linux server. The DB version is '5.0.89-community'. Configuration is controlled via cPanel. PHP actually runs elsewhere on shared hosting. IP addresses are static and don't change. Access from the remote IP address is properly configured. The website gets around 10K hits per day, with each hit generating a database query. Some of these queries are expensive (~1 sec execution time). All is fine and well until at some point the DB server starts refusing connections from the client, claiming that the specific user can't access the server from that IP. Resetting the server will always fix the problem for a day or two, and then the same thing happens. There are some other DBs on that server, some of which are hit pretty hard on occasion but not constantly. One of the apps maintains several persistent connections since it does a couple of updates per minute, though I don't think it's related. What's driving me mad is that I can't figure out why the server would start refusing connections. There is nothing in the logs. This server is a hosted dedicated server, so the hosting company created the OS image and I didn't write or go over every line of configuration. I'd do it, but I'm at a loss as to where to start looking. Any advice is appreciated.
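
    A hedged guess worth ruling out: MySQL blocks a host after max_connect_errors aborted connects (the default is only 10 on 5.0), and a server restart clears the block, which would match the "works again for a day or two" pattern. A sketch of how to check and clear it without a restart (credentials are placeholders):

      mysqladmin -u root -p flush-hosts                                    # clears the blocked-host cache
      mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Aborted_connects';"    # how many failed connects so far
      mysql -u root -p -e "SET GLOBAL max_connect_errors = 10000;"         # raise the ceiling until the cause is found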

    Read the article

  • Why does Exchange 2003 silently reject emails with large attachments?

    - by Cypher
    Our environment: Exchange Server 2003 Standard, single instance, running on Windows Server 2003 Standard, configured to not send/receive mail with attachments larger than 10 MB. NDRs are not enabled. The issue: When an external sender sends an email with an attachment larger than 10MB, Exchange, as configured, does not receive the message. However, the sender of that message does not receive any notifications from his own mail server that the message could not be delivered due to attachment size. However, if an external user tries to send an email to a non-existent user, they do receive a message from their mail server indicating that the user does not exist. Why is that, and is there anything I can do about it? It would be nice if the sender received notification that the attachment file size exceeds our limits and their message was never received... Update: The Exchange server has a SpamAssassin box in front of it... could that have something to do with it? Here is one of the last lines from SpamAssassin's logs when searching for my test e-mails: mail postfix/smtp[19133]: 2B80917758: to=, relay=10.0.0.8[10.0.0.8]:25, delay=4.3, delays=2.6/0/0/1.7, dsn=2.6.0, status=sent (250 2.6.0 Queued mail for delivery) My assumption is that SpamAssassin thinks the message is OK and is forwarding it off to Exchange. Update: I've verified that Exchange is receiving the message and generating an NDR. However, delivery of NDRs is disabled to prevent backscatter. Is there something that I can do to get Exchange to send a bounce message to the sending mail server (or verify that the message is being sent) so the sending mail server can notify its sender of the bounce?

    Read the article

  • Cannot Access Local Network Shares (Strange Schannel and lsass.exe issues)

    - by Fake
    When I browse to my own computer's shares by going to \\MYCOMPUTERNAME\, I cannot access any of the shares on my LOCAL machine (nor can I access them remotely), and it generates about 40 of the following errors in my system event log: The following fatal alert was generated: 10. The internal error state is 1203. Details: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Schannel" Guid="{1F678132-5938-4686-9FDC-C8FF68F15C85}" /> <EventID>36888</EventID> <Version>0</Version> <Level>2</Level> <Task>0</Task> <Opcode>0</Opcode> <Keywords>0x8000000000000000</Keywords> <TimeCreated SystemTime="2011-04-05T13:52:09.144278900Z" /> <EventRecordID>79628</EventRecordID> <Correlation /> <Execution ProcessID="552" ThreadID="672" /> <Channel>System</Channel> <Computer>DEVELOP4.CONTOSO.COM</Computer> <Security UserID="S-1-5-18" /> </System> <EventData> <Data Name="AlertDesc">10</Data> <Data Name="ErrorState">1203</Data> </EventData> </Event> Additional information: the process that is generating the error is lsass.exe; OS: Windows 7 Professional x64; joined to domain: yes. I was able to access the shares locally in the past. I am having the same issue on 3 other computers that have similar configurations. Any help would be greatly appreciated, because I have no idea what's wrong. Thanks!

    Read the article

  • Knowledge and user generated content management system to track files, research, proposals, etc.?

    - by Eshwar
    I'll try to keep it short. Here's the scenario: we have employees all over the world performing similar work, i.e. research, generating PowerPoint slides, Word documents, graphics, etc. Many times a lot of this previous work can be reused for another future project. The current arrangement is email and phone calls, which, as you would agree, is quick if you know where to look but otherwise archaic and very, very inefficient. So I am looking for software that will allow me to do the following: tag files (e.g. an investor presentation on cellphone usage in Kenya would be tagged investor, cellphone, kenya); manage references (e.g. if we read something on the internet, we should be able to paste that link in some fashion and tag it as above); preferably cloud-based so that it can be accessed by anybody, and additionally it would be nice (though NOT a must) to have access levels (director, manager, everyone); a nice interface that non-technically-savvy folks can warm up to ;) A desktop app would be handy so that people don't always have to click upload or something. A tree-based system is inefficient in this case because content is usually linked across branches and also people might not quite agree on one format of a tree. Tagging works around this very nicely. What I have considered so far: Evernote (for its more professional look), Springpad (for its versatility with content), Mendeley (this is a research manager and in some ways ideal, but I fear it's limited to PDFs). The goal is that when somebody wants to look for a document, they don't have to ask a colleague; they can just search with keywords and all the relevant information shows up. Thanks!

    Read the article
