Search Results

Search found 14124 results on 565 pages for 'auto generated'.


  • Is it OK to share a private key file between multiple computers/services?

    - by Behrang
    So we all know how to use public/private key pairs with SSH, etc. But what's the best way to use/reuse them? Should I keep them in a safe place forever? I mean, I needed a pair of keys for accessing GitHub. I created a pair from scratch and used that for some time to access GitHub. Then I formatted my HDD and lost that pair. Big deal, I created a new pair and configured GitHub to use my new pair. Or is it something that I don't want to lose? I also needed a pair of public/private keys to access our company systems. Our admin asked me for my public key and I generated a new pair and gave it to him. Is it generally better to create a new pair for access to different systems or is it better to have one pair and reuse it to access different systems? Similarly, is it better to create two different pairs and use one to access our company's systems from home and the other one to access the systems from work, or is it better to just have one pair and use it from both places?
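
    A minimal sketch (not from the question) of keeping a separate key pair per system and letting the SSH client pick the right one; the file names and hosts below are illustrative only:

      # generate one key pair per system, each with its own file name
      ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_github -C "github access"
      ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_company -C "company access"

      # ~/.ssh/config -- choose the key per host, so losing or rotating one
      # key never affects the others
      # Host github.com
      #     IdentityFile ~/.ssh/id_rsa_github
      # Host gateway.example.com
      #     IdentityFile ~/.ssh/id_rsa_company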

    Read the article

  • External component has thrown an exception. ASP.NET ASPX PAGE POST

    - by Brandon
    I have an aspx page that communicates with a webservice I have. It connects to an SQL Server database on my virtual dedicated server. With just a little usage, I get this error:
      External component has thrown an exception.
      Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
      Exception Details: System.Runtime.InteropServices.SEHException: External component has thrown an exception.
      Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
      Stack Trace:
      [SEHException (0x80004005): External component has thrown an exception.]
      Luxand.FSDK.Initialize(String DataFilesPath) +0
      WebService.onLoad() +70
      WebService..ctor() +91
      facematch.btn_submit_Click(Object sender, EventArgs e) +218
      System.Web.UI.WebControls.Button.OnClick(EventArgs e) +105
      System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +107
      System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) +7
      System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +11
      System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) +33
      System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1746

    Read the article

  • Copy Database Wizard fails on creation of view into another not-yet-copied database

    - by user22037
    Update - I found that doing a manual detach/reattach using the MSDN article "How to: Move a Database Using Detach and Attach (Transact-SQL)" got around this issue. I'll just be creating a script to detach and reattach, but do the file copies manually. Any info on how to overcome the problems with the wizard would be helpful in the future. I am in the process of moving around 20 databases from our current server to a new one. When performing the copies, however, I have found that some databases cannot be copied if they have views into other databases that have not yet been copied to the target system. The log file generated says it "failed with the following error: 'Invalid object name'" in reference to the database in the view. If I first copy just the database referenced in the view and then in a separate step copy over the database containing the view, it is successful. However, some other databases have views into each other, so I can't just adjust the order in which the copy occurs. Is there any way to ignore this error and just allow everything to copy?
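
    For reference, a hedged sketch of the manual detach/copy/attach flow from that MSDN article, driven from a command prompt with sqlcmd; the server, database and file names are placeholders, not from the question:

      rem detach on the source server
      sqlcmd -S SourceServer -Q "EXEC sp_detach_db 'MyDb';"
      rem copy MyDb.mdf and MyDb_log.ldf to the new server's data directory here
      rem attach on the target server
      sqlcmd -S NewServer -Q "CREATE DATABASE MyDb ON (FILENAME = 'D:\Data\MyDb.mdf'), (FILENAME = 'D:\Data\MyDb_log.ldf') FOR ATTACH;"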

    Read the article

  • How can I measure actual memory usage from my running processes?

    - by NullUser
    I have two servers, server1 and server2. Both of them are identical HP blades, running the exact same OS (RHEL 5.5). Here's the output of free for both of them:
      ### server1:
                   total      used      free    shared   buffers    cached
      Mem:       8017848   2746596   5271252         0    212772   1768800
      -/+ buffers/cache:    765024   7252824
      Swap:     14188536         0  14188536
      ### server2:
                   total      used      free    shared   buffers    cached
      Mem:       8017848   4494836   3523012         0    212724   3136568
      -/+ buffers/cache:   1145544   6872304
      Swap:     14188536         0  14188536
    If I understand correctly, server2 is using significantly more memory for disk I/O caching, which still counts as memory used. But both are running the same OS and, if I remember correctly, I configured both with the same parameters when they were installed. I did a diff on /etc/sysctl.conf and they are identical. The problem is, I am collecting memory usage and other metrics over a period of time (e.g. vmstat, iostat, etc.) while a load is generated on the system. The memory used for caching is throwing off my calculations on the results. How can I measure actual memory usage from my running processes, rather than system usage? Is used - (buffers + cached) a valid way to measure this?
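
    Not part of the question, but a sketch of the arithmetic being asked about: the "-/+ buffers/cache" row already reports used memory with the page cache excluded, and a rough per-process total can be approximated by summing resident set sizes:

      # application memory, ignoring buffers/cache (the -/+ buffers/cache "used" column)
      free -m | awk '/buffers\/cache/ {print $3 " MB used by processes"}'
      # rough per-process total: sum of RSS (shared pages get counted more than once)
      ps -eo rss= | awk '{sum += $1} END {printf "%.0f MB total RSS\n", sum/1024}'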

    Read the article

  • Upgrading PHP, MySQL old-passwords issue

    - by Rushyo
    I've inherited a Windows 2k3 server running an XAMPP-installation from the stone age. I needed to upgrade PHP to facilitate an upgrade to MediaWiki to facilitate a new MediaWiki extension (to facilitate some documentation to facilitate doing my job to facilitate getting paid to facilit... you get the idea). However... installing a new version of PHP resulted in PHP's MySQL libraries refusing to communicate using MySQL's 'old style' 152-bit passwords. Not a problem in theory. The MySQL installation is post-4.1, so it should have the functionality to upgrade the user's passwords from 152-bit to 328-bit (what a weird hashing algorithm...). I ran the following: SET PASSWORD = PASSWORD('foo'); on MySQL but querying: SELECT user, password FROM mysql.user; returned just the same password I started out with - 152-bit. Now... I suspect you're thinking 'AHA! old-passwords is on!'. Unfortunately it's not - I've disabled it in the configuration (explicitly set it to 0), made doubly sure I have an absolute reference to that configuration file and ensured the service isn't using the --old-passwords flag. The service was reset after each and every operation. So I went onto another system and generated the 328-bit hash on there, copying the hash over to the first MySQL instance. Unfortunately, that didn't work either (I did remember to FLUSH PRIVILEGES). The application error is: "'mysqlnd cannot connect to MySQL 4.1+ using the old insecure authentication. Please use an administration tool [...snip...] Is there anything else I can try to get PHP to recognise MySQL as not using the 'old insecure authentication'? MySQL seems to be stuck in 'old-passwords' mode and I can't get it out of it.
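
    As a hedged illustration only (the user name below is a placeholder): one thing worth checking from the mysql client is whether the session itself still has old_passwords set, since PASSWORD() follows the session value and keeps emitting the short hash while it is on:

      mysql -u root -p -e "SHOW VARIABLES LIKE 'old_passwords';"
      # force the 4.1+ hash for this session, reset the password, then inspect the hash length
      mysql -u root -p -e "SET SESSION old_passwords = 0; SET PASSWORD FOR 'webuser'@'localhost' = PASSWORD('foo'); SELECT user, LENGTH(password) FROM mysql.user;"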

    Read the article

  • PHP application failed to connect after the network was plugged back in

    - by tntu
    My data center appears to have had some issues with their network, and thus my server suffered from on-and-off network connectivity for about an hour. After the connection was completely re-established, my code still kept reporting the same issue over and over until I restarted the service. The code is a simple PHP script that loops forever, checking the Apple feedback server and then sleeping for a few minutes before it begins all over again. Now, I understand the error being generated while the network is down, but once it came back up, why did it continue until I restarted the code? Does PHP have something that needs to be re-initialized? Messages log:
      Dec 20 08:57:22 server kernel: r8169: eth0: link down
      Dec 20 08:57:28 server kernel: r8169 0000:06:00.0: eth0: link up
      Dec 20 08:57:29 server kernel: r8169: eth0: link down
      Dec 20 08:57:33 server kernel: r8169 0000:06:00.0: eth0: link up
      Dec 20 08:57:33 server kernel: r8169: eth0: link down
      Dec 20 08:57:37 server kernel: r8169 0000:06:00.0: eth0: link up
      Dec 20 08:57:38 server kernel: r8169: eth0: link down
      Dec 20 08:57:44 server kernel: r8169 0000:06:00.0: eth0: link up
      Dec 20 08:57:44 server kernel: r8169: eth0: link down
      Dec 20 08:57:52 server kernel: r8169 0000:06:00.0: eth0: link up
      Dec 20 08:57:52 server kernel: r8169: eth0: link down
      Dec 20 09:10:58 server kernel: r8169 0000:06:00.0: eth0: link up
    PHP Error:
      PHP Warning: stream_socket_client(): php_network_getaddresses: getaddrinfo failed: Name or service not known in /home/push/feedback.php on line 36
    Code Line 36:
      $apns = stream_socket_client('ssl://feedback.sandbox.push.apple.com:2196', $errcode, $errstr, 60, STREAM_CLIENT_CONNECT, $stream_context);

    Read the article

  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when the entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, thus the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content and insertion date; and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids across the two tables. To sum it up:
      2 tables
      lots of INSERT
      no UPDATE
      some DELETE, once a day at most
      some user-generated SELECT with JOIN
      huge data set
    What would an optimal server configuration (software and hardware; I assume, for example, that RAID10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that allows SELECT queries to complete in a reasonably short time. I can provide more information about the current setup (like tables, indexes ...) if needed.
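
    Not from the post, but a sketch of the postgresql.conf knobs usually looked at first for an insert-heavy, read-mostly workload; the values are placeholders to be sized against the machine's actual RAM and storage:

      # postgresql.conf excerpt -- illustrative values only
      shared_buffers = 2GB              # often started around 25% of RAM
      effective_cache_size = 6GB        # what the OS page cache can realistically hold
      work_mem = 32MB                   # per-sort/hash memory for the JOIN-heavy SELECTs
      maintenance_work_mem = 512MB      # speeds up index maintenance and the daily DELETE cleanup
      checkpoint_completion_target = 0.9
      random_page_cost = 1.1            # lower it if the data sits on RAID10/SSD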

    Read the article

  • Get Matrox Millennium video card working in Ubuntu 9.10

    - by wcoenen
    I have installed Ubuntu 9.10 on an old PC and it is mostly working, except for some heavy drawing defects that show up whenever I start dragging a window or scrolling inside a window or menu. It looks like the video driver copies the rectangle being moved to the wrong location. I have taken a look in /var/log/Xorg.0.log and the following lines show the detected video card:
      (--) PCI:*(0:0:8:0) 102b:0519:0000:0000 Matrox Graphics, Inc. MGA 2064W [Millennium] rev 1, Mem@ 0xf9800000/16384, 0xfb000000/8388608, BIOS @0x????????/65536
      (==) Using default built-in configuration (30 lines)
      (==) --- Start of built-in configuration ---
      Section "Device"
          Identifier "Builtin Default mga Device 0"
          Driver "mga"
      EndSection
    How do I fix the drawing defects? It turned out that the 24-bit color depth (automatically selected by Ubuntu 9.10) was the problem; apparently the mga driver doesn't handle this well for cards with little memory. I took the following steps to resolve the issue (you can skip the first three steps if you already have a semi-working xorg.conf file):
    1. Reboot Ubuntu in recovery mode, to get a root console without X running.
    2. Run Xorg -configure to generate a xorg.conf.new file.
    3. Copy the file to /etc/X11/xorg.conf with cp xorg.conf.new /etc/X11/xorg.conf (assuming it didn't exist yet; that's why I generated it).
    4. Open the new config file with sudo nano /etc/X11/xorg.conf and make sure the screen section is configured for 16-bit color depth like this:
      Section "Screen"
          Identifier "Screen0"
          Device "Card0"
          Monitor "Monitor0"
          DefaultDepth 16
          SubSection "Display"
              Viewport 0 0
              Depth 16
              Modes "1024x768"
          EndSubSection
      EndSection
    I can't guarantee those were the only important changes I made - I tried a few things in my attempts to create a valid xorg.conf file. But I'm pretty sure that the screen section was the important part.

    Read the article

  • How to make Firefox file associations consistent with Ubuntu file associations?

    - by wbharding
    This seems to be a pretty commonly Googled question, but one for which there are no answers. http://www.linuxquestions.org/questions/linux-software-2/firefox-download-mime-types-378902 http://www.birkit.com/content/kubuntu-linux/internet/firefox/fix-file-associations-in-firefox.html Those being a few links among many. The gist of what I want to accomplish is to have Firefox understand the file associations of what I download without me having to manually map all of them myself. GNOME knows the file extensions, so I would have expected that Firefox could just use the already-known file mappings there to open the right stuff (as I presume Chrome does). But it doesn't. At least not for me, using Firefox 4, and not by default. When I click on a downloaded file right now, Firefox always has to ask me what application should be used to open the file. A handful of Google results tell me that I can reassociate my file extensions by deleting ~/.mozilla/firefox/[profile name]/mimeTypes.rdf, but while deleting that file does in fact result in a new mimeTypes file being generated, the new mimeTypes is just as barren as the old one had been. Based on the amount of unanswered Qs on the Googlesphere, I know this is a very common problem for Ubuntu users, but it seems to be one for which nobody has chimed in with a good solution. Maybe Superuser can finally be the panacea for us all?
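
    One hedged way to check whether the desktop side actually has the associations Firefox is supposed to inherit (xdg-utils commands; the PDF type is only an example):

      # what the desktop thinks the type and the default handler are
      xdg-mime query filetype ~/Downloads/example.pdf
      xdg-mime query default application/pdf
      # register a default handler if one is missing (evince.desktop is an example)
      xdg-mime default evince.desktop application/pdf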

    Read the article

  • De-duplicating backup tool on a block basis? [closed]

    - by SST
    I am looking for an (ideally free as in speech or beer) backup tool for Unix-like OS which can store deduplicated backups, i.e. only nonredundant content takes up additional space. I already looked at dirvish (my first candidate) and rsnapshot, which use hardlinks to achieve deduplication on a per-file level. However, as I want to back up large files (Thunderbird mailboxes 3GB, VMware images 10GB), such files are stored again entirely even if just a few bytes change. Then there are rsync-based tools like rdiff-backup which only store deltas and a current mirror. However, as the deltas are generated against each previous mirror, it is difficult to fine-tune the retention granularity (only keep one backup after a week, etc.) because the deltas would have to be re-evaluated. Another approach is to partition content into blocks and store each block only if it is not stored yet, otherwise just linking it to the first occurrence. The only tool I know of that currently does this is obnam (http://liw.fi/obnam), and it even supports zlib-compression and gpg-encryption -- nice! But it is very slow, AFAICT. Does anyone know of any other solid backup software which supports deduplication on a sub-file level, ideally with at least some management options (show/select/delete generations...)?

    Read the article

  • Can't configure DNS properly on CentOS

    - by Nuker
    I am on a VPS I must manage myself. I have network problems: in the last few days many of my users have reported they can't reach my site via my domain, and it seems Google and Facebook can't either (this never happened before). However, I can reach my site without problems, and so can many other people. So I tested by making a PHP include like this: <?php include 'http://mysite.com/somepage.php'; ?> and I get this error:
      Warning: include(): php_network_getaddresses: getaddrinfo failed: Name or service not known in
    I even tried including content from yahoo.com or facebook and that didn't work either. However, the includes will work if I use IPs instead of domains. Do I have a DNS problem or something? What can I do to fix it? I'm on Linux 2.6.32-431.11.2.el6.x86_64 on x86_64, CentOS Linux 6.5. I have this in my resolv.conf:
      # Generated by NetworkManager
      # No nameservers found; try putting DNS servers into your
      # ifcfg files in /etc/sysconfig/network-scripts like so:
      #
      # DNS1=xxx.xxx.xxx.xxx
      # DNS2=xxx.xxx.xxx.xxx
      # DOMAIN=lab.foo.com bar.foo.com
      nameserver 8.8.8.8
      nameserver 8.8.4.4
    Thank you.
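
    Since that resolv.conf already lists the Google resolvers, a hedged sketch for narrowing down whether it is resolution itself or outbound DNS traffic that fails (domain names as in the question):

      # does resolution work at all, and against which server?
      dig +short mysite.com @8.8.8.8
      getent hosts yahoo.com
      # is outbound DNS being dropped or rejected by the firewall?
      iptables -L OUTPUT -n -v | grep -E 'dpt:53|REJECT|DROP'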

    Read the article

  • Can I regenerate the rsa key for SSH access to a Cisco router? Or should I completely erase the SSH config?

    - by Josh
    I have a production 2691 that I administer via telnet. I'd like to change that to SSH. Looking at the config, it looks like there have been keys generated in the past. I think the history here is SSH was set up, they had issues connecting, and fell back to telnet. There are a number of crypto entries, including the following:
      crypto pki trustpoint Gateway-2691.xxx.com
       enrollment selfsigned
       subject-name cn=IOS-Gateway-2691.xxx.com
       revocation-check none
       rsakeypair Gateway-2691.xxx.com
    I've also got this going...
      Gateway-2691#sh ip ssh
      SSH Disabled - version 1.99
      %Please create RSA keys (of atleast 768 bits size) to enable SSH v2.
      Authentication timeout: 120 secs; Authentication retries: 3
      Gateway-2691#
    My question is simply, can I run crypto key generate rsa again to set it up again? Is there a way to negate or "no" all of the previous SSH config so that I can start fresh there? I may be asking the wrong questions, as I'm learning here. As for the SSH how-to, I'm sure I can find information in many places. I'm just basically wondering if I need to start fresh, or if I can pick up where the last attempt at SSH config left off.
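
    A hedged sketch of the sequence being asked about, in IOS (the domain name is taken from the existing trustpoint; note that zeroizing deletes the current key pairs, including the one the self-signed trustpoint references):

      Gateway-2691(config)# crypto key zeroize rsa
      Gateway-2691(config)# ip domain-name xxx.com
      Gateway-2691(config)# crypto key generate rsa modulus 2048
      Gateway-2691(config)# ip ssh version 2
      Gateway-2691(config)# line vty 0 4
      Gateway-2691(config-line)# transport input ssh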

    Read the article

  • How to find what files / directories are not copied yet?

    - by user8676
    Hi all, I found the following 'nice' situation: An archive of a few disks (actually three disks) which has a bunch of photos, (more or less) organized. Well, this is good. A big disk shared on a network which has a bunch of photos in a different folder structure (even if it is somewhat recognizable to a human being) than the archive described above, but some of the files on this big network share are the same as the files from the archive. Well, this is bad. What we need is to move the different (new) files from the network share into the archive (perhaps we'll use a new disk added to the archive for this). The program that we need is different from a regular File Duplicate Finder program, because a File Duplicate Finder usually finds the duplicates from all sources by comparing each file with another. We want to find the differences between the two sources. It is fine for us to have a report generated as a text file, which we'll then use to do our move. A Windows solution would be preferred. Any ideas? TIA
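
    A hedged, Unix-style sketch of the content-based comparison described above (paths are placeholders; the same idea works on Windows with any hashing tool): hash the archive once, then report every share file whose checksum is absent from it:

      # hash everything in the archive once
      find /mnt/archive -type f -print0 | xargs -0 md5sum | awk '{print $1}' | sort -u > archive.md5
      # hash the network share, then keep only lines whose checksum is not in the archive
      find /mnt/share -type f -print0 | xargs -0 md5sum > share.md5
      awk 'NR==FNR {seen[$1]; next} !($1 in seen)' archive.md5 share.md5 > new_files.txt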

    Read the article

  • Glassfish and SSL [closed]

    - by Richard
    I'm struggling to get SSL working on GlassFish 3.1.1. I've been following tutorials like http://javadude.wordpress.com/2010/04/06/getting-started-with-glassfish-v3-and-ssl/ and SO posts like this: Issues with setting up SSL on Glassfish v3. The above links are for information only; I've summarised what I've done below. As far as I can tell I'm doing everything correctly, but I'm getting this error:
      SSL configuration is invalid due to No available certificate or key corresponds to the SSL cipher suites which are enabled
    Some background on what I have done: My cert is from GoDaddy. I generated the CSR from a new keystore (keystore.jks), then imported the resulting certs back into the same keystore and set the keystore password to the same pwd as the GF master password. Then I created a new SSL listener in GF and pointed it at my keystore file (which I copied into domains/domain1/config). I set the Nickname to the alias of my cert (which is something like 'mydomain.org', i.e. the name that I get when I run keytool -list). In the ciphers section on the network listeners page, I leave the defaults in place (empty, which means all ciphers are available, I think). In domain.xml I've replaced all instances of s1as with 'mydomain.org'. This is the question: What exactly is causing the error highlighted? I'm guessing it's a mismatch between my listener config and the aliases in my keystore, or something similar, but I'm not really sure what. Thanks
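
    That message often comes down to the listener not finding a usable private key under the configured nickname, or the key's own password differing from the keystore/master password; a hedged sketch of the keytool checks (alias and paths as described above):

      # confirm the alias actually present in the keystore GlassFish is reading
      keytool -list -v -keystore domains/domain1/config/keystore.jks | grep -i 'alias name'
      # align the key's password with the keystore password (prompts for the old/new values)
      keytool -keypasswd -alias mydomain.org -keystore domains/domain1/config/keystore.jks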

    Read the article

  • Free blog sites where the blogs can (if the template does) validate as XHTML Strict 1.0?

    - by Deleted
    I'm looking for a site where I may register a free blog and have the blog validate as XHTML 1.0 Strict. On the surface this may seem like a trivial problem only related to the theme/template in use, but that's unfortunately not the case. One example of a provider which can't fulfill this requirement is Blogger. Although the pages of the blogs there present themselves as XHTML 1.0 Strict, it is impossible to actually comply with the requirements inherited by that markup type in the blog (as the XHTML which is generated by Blogger makes the page as a whole invalid). I've sent a mail to Tumblr to see if it is possible with them, but so far my reply consists of them having forwarded my mail with a "suggestion" to the development department. I don't know if we had a communication error or if I'm actually going to receive a proper answer later. Time will tell. I haven't had time to investigate Tumblr myself, so they may very well be the solution to this problem. To sum things up, I'm looking for any provider:
      That offers a free blog.
      Whose blogs have the capability to validate as XHTML 1.0 Strict. By capability I mean that the system shouldn't get in the way of creating/using a theme which complies with XHTML 1.0 Strict.
      That preferably is large, or at least likely to stay around for a couple of years to come. But I'm willing to take my chances if none of the established providers are up to the task.
    Thank you for reading! I hope you know of any provider which would be suitable, preferably with proof by linking to a blog there which validates. I'm not looking for suggestions to look into, as there are far too many to investigate and far too little time. If you know of something for sure, I'd be very happy to know about it.

    Read the article

  • Unwanted forced authentication after server restart (Win 2k3)

    - by Felthragar
    We're running a Win 2k3 R2 Standard 64-bit edition server. On this server we run a file server and provide remote login to our network through VPN. We do not currently utilize a domain setup; all user accounts are local accounts on the server. Each employee is given a unique account to log in to the server. The password is a randomly generated 16-character string, which makes it hard to remember. What we've done is basically have the password stored on the client machine (standard "Remember Me" functionality). This has worked well. However, last night our server automatically restarted after an automatic update. After that, some of our employees, myself included, had to re-authenticate with the server, submitting our credentials again. Then again, some others did not have to re-authenticate. Do you guys have any idea why this is? Is there a setting to prevent this? I've checked the logs but I couldn't find anything of interest. Then again, I'm not really sure what I'm looking for. Thanks in advance, I'll try to answer any additional questions you may have. Edit: When I say "login" or "authenticate" I mean through the standard Windows Samba protocol. Edit 2: OK, new day. Tonight the server restarted again, and the same two clients that had to re-authenticate yesterday had to re-authenticate today as well. The rest did not.

    Read the article

  • Enabling HTTP access on port 80 for CentOS 6.3 from the console

    - by Hugo
    I have a CentOS 6.3 box running on Parallels and I'm trying to open port 80 so it is accessible from outside. I tried the GUI solution from this post and it works, but I need to get it done from a script. I tried to do this:
      sudo /sbin/iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
      sudo /sbin/iptables-save
      sudo /sbin/service iptables restart
    This creates exactly the same iptables entries as the GUI tool, except it does not work:
      $ telnet xx.xxx.xx.xx 80
      Trying xx.xxx.xx.xx...
      telnet: connect to address xx.xxx.xx.xx: Connection refused
      telnet: Unable to connect to remote host
    UPDATE:
      $ netstat -ntlp
      (No info could be read for "-p": geteuid()=500 but you should be root.)
      Active Internet connections (only servers)
      Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
      tcp   0      0      0.0.0.0:3306       0.0.0.0:*          LISTEN   -
      tcp   0      0      127.0.0.1:6379     0.0.0.0:*          LISTEN   -
      tcp   0      0      0.0.0.0:111        0.0.0.0:*          LISTEN   -
      tcp   0      0      0.0.0.0:80         0.0.0.0:*          LISTEN   -
      tcp   0      0      0.0.0.0:22         0.0.0.0:*          LISTEN   -
      tcp   0      0      127.0.0.1:631      0.0.0.0:*          LISTEN   -
      tcp   0      0      127.0.0.1:25       0.0.0.0:*          LISTEN   -
      tcp   0      0      0.0.0.0:37439      0.0.0.0:*          LISTEN   -
      tcp   0      0      :::111             :::*               LISTEN   -
      tcp   0      0      :::22              :::*               LISTEN   -
      tcp   0      0      ::1:631            :::*               LISTEN   -
      tcp   0      0      :::60472           :::*               LISTEN   -
      $ sudo cat /etc/sysconfig/iptables
      # Generated by iptables-save v1.4.7 on Wed Dec 12 18:04:25 2012
      *filter
      :INPUT ACCEPT [0:0]
      :FORWARD ACCEPT [0:0]
      :OUTPUT ACCEPT [5:640]
      -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
      -A INPUT -p icmp -j ACCEPT
      -A INPUT -i lo -j ACCEPT
      -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
      -A INPUT -j REJECT --reject-with icmp-host-prohibited
      -A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
      -A FORWARD -j REJECT --reject-with icmp-host-prohibited
      COMMIT
      # Completed on Wed Dec 12 18:04:25 2012
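
    One detail visible in the saved rules above: the port-80 ACCEPT was appended after the catch-all "REJECT --reject-with icmp-host-prohibited" line, so it can never match. A hedged sketch of inserting it above the reject and persisting it (position 5 assumes the rule order shown):

      # insert before the REJECT rule instead of appending after it
      sudo /sbin/iptables -I INPUT 5 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
      # iptables-save alone only prints to stdout; this actually writes /etc/sysconfig/iptables
      sudo /sbin/service iptables save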

    Read the article

  • Removing HttpModule for specific path in ASP.NET / IIS 7 application?

    - by soccerdad
    Most succinctly, my question is whether an ASP.NET 4.0 app running under IIS 7 integrated mode should be able to honor this portion of my Web.config file:
      <location path="auth/windows">
        <system.webServer>
          <modules>
            <remove name="FormsAuthentication"/>
          </modules>
        </system.webServer>
      </location>
    I'm experimenting with mixed mode authentication (Windows and Forms). Using IIS Manager, I've disabled Anonymous authentication to auth/windows/winauth.aspx, which is within the location path above. I have Failed Request Tracing set up to trace various HTTP status codes, including 302s. When I request the winauth.aspx page, a 302 HTTP status code is returned. If I look at the request trace, I can see that a 401 (unauthorized) was originally generated by the AnonymousAuthenticationModule. However, the FormsAuthenticationModule converts that to a 302, which is what the browser sees. So it seems as though my attempt to remove that module from the pipeline for pages in that path isn't working. But I'm not seeing any complaints anywhere (event viewer, yellow pages of death, etc.) that would indicate it's an invalid configuration. I want the 401 returned to the browser, which presumably would include an appropriate WWW-Authenticate header. A few other points: a) I do have <authentication mode="Forms"> in my Web.config, and that is what the 302 redirects to; b) I got the "name" of the module I'm trying to remove from the inetserv\config\applicationHost.config file; c) I have this element in my Web.config file: <modules runAllManagedModulesForAllRequests="false">; d) I tried a <location> element for the path in which I set the authentication mode to "None", but that gave a yellow exception page saying that the property can't be set below the application level. Anyone had any luck removing modules in this fashion?

    Read the article

  • Subsequent runs of rsync locally don't reduce data transferred

    - by sharakan
    I have an EC2 instance with data I want to sync to a mounted, but remote, volume, as a backup. rsync seems like the way to go with this, so as a test I took my test file (a Postgres pg_dump file) and used rsync -v to copy it to the mounted volume:
      [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
      dump.sql.1
      sent 821704315 bytes  received 31 bytes  3416650.09 bytes/sec
      total size is 821603948  speedup is 1.00
    Then, I ran it again, expecting to see minimal sent/received numbers because it would just be checksums. Instead...
      [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
      dump.sql.1
      sent 821704315 bytes  received 31 bytes  3402502.47 bytes/sec
      total size is 821603948  speedup is 1.00
    I'm new to rsync so perhaps I'm missing something, but isn't the idea that the source and destination files are checked for differences, and then a patch is generated and applied to the destination? Why is this not reducing the amount of data 'sent' to just the size of the checksums? Some background if it's relevant: the mounted volume is using s3fs, mounted with s3fs <bucketname> backup.
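
    For what it's worth, rsync defaults to --whole-file when both source and destination are local paths (and a mounted volume counts as local), which skips the delta algorithm entirely; a hedged sketch of forcing the delta transfer for this kind of test (file names from the question):

      # force the rolling-checksum delta algorithm even for a local copy
      rsync -v --no-whole-file dump.sql.1 ../backup/dump.sql
      # --stats separates literal data from matched data, so the delta effect is visible
      rsync -v --no-whole-file --stats dump.sql.1 ../backup/dump.sql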

    Read the article

  • Windows 7 using LLT for IPv6

    - by Seoman
    The question asked below is based on the specific implementations of the OS, not the RFC. Looking for a way to assign a fixed IP address to a host before it boots, I found that CentOS 6 works fine with no modifications and Windows 7 does not work at all. As defined in enter link description here, there are 3 valid ways to generate a DUID:
      1. Link-layer address plus time
      2. Vendor-assigned unique ID based on Enterprise Number
      3. Link-layer address
    Looking at the CentOS box, which works fine, I can see the following autogenerated DUID:
      option dhcp6.client-id 0:1:0:1:19:60:25:f1:52:54:0:6b:b9:9e;
    and the MAC address for this host is:
      ifconfig eth1 | grep HWaddr
      eth1 Link encap:Ethernet HWaddr 52:54:00:6B:B9:9E
    As you can see, the DUID contains the MAC address. I can assign a fixed IP address to this host by including an entry on my DHCP server similar to:
      host vm {
        hardware ethernet 52:54:00:6B:B9:9E;
        fixed-address6 2001:db8:0:1::200;
        if packet(0,1) = 1 {
          log(debug,"VM Request match!");
        }
      }
    And the CentOS 6 host gets its IP. On the Windows side, I faced a common problem explained on this other link enter link description here. In summary, Win7 uses option 2 of the DUID generation, or a variation of it. The link explains how to move it to an LLT (link-layer address plus time) DUID, but it is not working fine. If I modify the DUID to one that looks like the one generated on CentOS (but with the right MAC) it works as expected. Question 1: How can I change the DUID generation for Windows 7 to be based on the MAC, as CentOS 6 does? Thanks

    Read the article

  • IIS7 / ASP.NET 4.0 - 404 errors only for external clients

    - by dmcgiv
    Recently we moved an ASP.NET 3.5 website to 4.0 (integrated mode), and when we deployed to the client's server (Windows Server 2008 Web edition) we noticed that some .aspx pages are serving 404 errors. What is strange is that 1) the pages exist, 2) if you browse from the server itself the page is served as normal, and only external clients get the 404, 3) it's the default 404 error page, not the one configured in the web.config, and 4) it only happens for some .aspx pages, and I've not been able to establish a link between the pages that are not being served externally. We are using a URL rewriter module which I first thought might be at fault, but then realised that only some of the failing pages are being rewritten. I've also tested removing the HTTP module and the problem still persists. As everything is working as expected when logged onto the server, I was thinking it may be some sort of permission issue, but why would it only affect a few pages? I turned on failed request tracking and the debug files are being generated with the expected 404 error, although at the moment I'm not sure what most of the data means, so I can't decipher what's going on internally. I'd really appreciate some help with this one.

    Read the article

  • Moving Images from Database to File System

    - by msarchet
    So currently in our system we have been storing image files in the database (SQL Express 2005). Unfortunately it wasn't foreseen that this would reach the max database size allowed by the SQL Express license. So I have proposed a plan of storing the images in the file system and only indexing where the file is in the database. The plan is to save the root path in our OptionsTable as something like ImagesRoot and then only save the actual ImageID in the table, which is basically a FK from the PK of the record with the image. I have determined that it would be best to then split this down into sub-directories inside of the ImagesRoot based on every 1000 images, so basically (ImageID / 1000)\(ImageID % 1000) (e.g. if ImageID is 1999 it would be in %ImageRoot%\1\999). I'm looking for any potential pitfalls of this system and anything that could be improved, as I am already receiving some resistance from the owner of the company, who wants everything to be in databases. Along those lines, I would also take reasons why it should all be in databases. I should mention we already have in place automated backups that run for all of our customers' databases and any files generated by our program that are required to be saved over a period of time. These are optional, but if someone isn't using our system it is expected that they are using their own, or data loss isn't our problem (it is if our system fails and they are using it!). Thanks

    Read the article

  • No outbound internet connection after restarting CentOS 6.3

    - by wnstnsmth
    After restarting a headless CentOS 6.3 machine, it lost outbound internet connectivity, i.e. I can still connect to the server via SSH (ssh root@**.126.18.56), but stuff such as ping google.com gives google.com: unknown host, and yum list some_package gives a lot of network errors. This is what ifconfig gives:
      eth0      Link encap:Ethernet  HWaddr 00:25:90:78:2D:5D
                inet addr:**.126.18.56  Bcast:**.126.18.255  Mask:255.255.255.0
                inet6 addr: fe80::225:90ff:fe78:2d5d/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:75594 errors:0 dropped:0 overruns:0 frame:0
                TX packets:787 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:7074741 (6.7 MiB)  TX bytes:144391 (141.0 KiB)
                Interrupt:20 Memory:f7a00000-f7a20000
      eth1      Link encap:Ethernet  HWaddr 00:25:90:78:2D:5C
                UP BROADCAST MULTICAST  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
                Interrupt:16 Memory:f7900000-f7920000
      lo        Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:6 errors:0 dropped:0 overruns:0 frame:0
                TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:504 (504.0 b)  TX bytes:504 (504.0 b)
    I have absolutely no clue how to debug this, and I find it very strange since I can still connect via ssh. EDIT: Weirdly, /etc/resolv.conf does not contain any entries, or none that I can make sense of:
      # Generated by NetworkManager
      search sui-inter.net
      # No nameservers found; try putting DNS servers into your
      # ifcfg files in /etc/sysconfig/network-scripts like so:
      #
      # DNS1=xxx.xxx.xxx.xxx
      # DNS2=xxx.xxx.xxx.xxx
      # DOMAIN=lab.foo.com bar.foo.com
    So is it possible that rebooting the server erased that file? It worked before at least! And how do I solve this? By the way, pinging an IP address works.
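
    A hedged sketch of what the comment block in that resolv.conf is suggesting: put the DNS servers into the interface config so the network scripts recreate resolv.conf with nameserver lines on every boot (the resolver IPs here are examples, not from the question):

      # append resolvers to the interface config (adjust eth0 and the IPs as needed)
      echo "DNS1=8.8.8.8" >> /etc/sysconfig/network-scripts/ifcfg-eth0
      echo "DNS2=8.8.4.4" >> /etc/sysconfig/network-scripts/ifcfg-eth0
      service network restart
      cat /etc/resolv.conf   # should now contain nameserver lines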

    Read the article

  • Our client's site is redirecting to a scammy pill site [closed]

    - by Alex Demchak
    Possible Duplicate: My server's been hacked EMERGENCY
    We usually host our clients' sites, but we aren't hosting this one. The website itself (weddle-funeral.com) works just fine. If you load Google, search for weddle funeral stayton oregon, and click that link, the site redirects to a scammy pill site. I went through the site and there were some PHP files in the WordPress plugins that got quarantined by my antivirus. I removed ALL non-essential files and uploaded fresh versions of all the plugins, but it's STILL redirecting from Google. I tried logging in to the cPanel (on a virtual private server), and cPanel flashed a red warning screen:
      The site's security certificate is not trusted! You attempted to reach XXXXX.com, but the server presented a certificate issued by an entity that is not trusted by your computer's operating system. This may mean that the server has generated its own security credentials, which Google Chrome cannot rely on for identity information, or an attacker may be trying to intercept your communications. You should not proceed, especially if you have never seen this warning before for this site.
    (Keep in mind, that's for the HOSTING account's cPanel.) Is there something on the SERVER, perhaps, that's causing the redirect?
    EDIT: .htaccess file contents
      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress
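
    Cloaked pharma redirects are often triggered only for search-engine referrers or user agents, which would fit the site looking fine when visited directly; a hedged sketch for hunting injected PHP and checking what crawlers see (the patterns are common ones, not exhaustive):

      # look for the usual obfuscation primitives in PHP files (run from the docroot)
      grep -rIl --include='*.php' -E 'eval\s*\(\s*base64_decode|gzinflate\s*\(\s*base64_decode' .
      # see whether a crawler-like request gets redirected
      curl -A "Googlebot/2.1" -sI http://weddle-funeral.com/ | head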

    Read the article

  • Linux: How do I use Munin in cPanel to monitor MySQL?

    - by Continuation
    I have a cPanel server running CentOS 5.5. I want to use Munin to monitor MySQL. I went to Main >> cPanel >> Manage Plugins, selected "Install and keep updated" for Munin and clicked "Save". I got the usual bunch of status updates about the install. At the end I got:
      Going to read '/home/.cpan/sources/modules/02packages.details.txt.gz'
      Database was generated on Wed, 02 Mar 2011 18:28:33 GMT
      ..........................................................................DONE
      Going to read '/home/.cpan/sources/modules/03modlist.data.gz'
      Out of memory!
      Callback called exit.
      Done
      Done
      Done
      Process Complete
    As you can see, I got an "Out of memory!" message, but after that it said "Process Complete". Was Munin installed? When I went back to "Manage Plugins", Munin has a check against "Install and keep updated". So is everything alright? And how do I use Munin now? How do I configure it to monitor MySQL? Where can I see the results? Thanks.
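
    Assuming the install did complete (worth re-running given the out-of-memory message), a hedged sketch of how MySQL graphs are normally wired into munin-node; the paths are the usual CentOS ones and may differ under cPanel's packaging:

      # ask munin-node which plugins it thinks it can run
      munin-node-configure --suggest | grep -i mysql
      # print (and optionally run) the ln commands that enable those plugins
      munin-node-configure --shell | grep -i mysql
      /etc/init.d/munin-node restart
      # graphs normally appear on the Munin HTML pages after a couple of 5-minute cron runs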

    Read the article
