Search Results


  • How to find what files / directories are not copied yet?

    - by user8676
    Hi all, I found myself in the following 'nice' situation: I have an archive of a few disks (three, actually) containing a bunch of more-or-less organized photos. Well, this is good. I also have a big disk shared on a network which holds a bunch of photos in a different folder structure (even if it is somewhat recognizable to a human being) than the archive described above, and some of the files on this big network share are the same as files in the archive. Well, this is bad. What we need is to move the different (new) files from the network share into the archive (perhaps onto a new disk added to the archive). The program we need is different from a regular duplicate-file finder, because a duplicate finder looks for duplicates across all sources, comparing each file with every other; we want to find the differences between the two sources. It is fine for us to have a report generated as a text file, which we would then use to do our move. A Windows solution is preferred. Any ideas? TIA

    Read the article

  • Free blog sites where the blogs can (if the template does) validate as XHTML Strict 1.0?

    - by Deleted
    I'm looking for a site where I can register a free blog and have the blog validate as XHTML 1.0 Strict. On the surface this may seem like a trivial problem related only to the theme/template in use, but unfortunately that's not the case. One example of a provider that can't fulfil this requirement is Blogger: although the blog pages there present themselves as XHTML 1.0 Strict, it is impossible to actually comply with the requirements inherited by that markup type, because the XHTML that Blogger generates makes the page as a whole invalid. I've sent a mail to Tumblr to ask whether it is possible with them, but so far my reply consists of them having forwarded my mail, with a "suggestion", to the development department. I don't know if we had a communication error or if I'm actually going to receive a proper answer later; time will tell. I haven't had time to investigate Tumblr myself, so they may very well be the solution to this problem. To sum things up, I'm looking for any provider:

        1. of a free blog;
        2. where the blog can validate as XHTML 1.0 Strict (by "capability" I mean that the system shouldn't get in the way of creating or using a theme that complies with XHTML 1.0 Strict);
        3. that is preferably large, or at least likely to stay around for a couple of years to come, though I'm willing to take my chances if none of the established providers are up to the task.

    Thank you for reading! I hope you know of a suitable provider, preferably with proof in the form of a link to a blog there that validates. I'm not looking for suggestions to investigate, as there are far too many to check and far too little time; if you know of something for sure, I'd be very happy to hear about it.

    Read the article

  • Glassfish and SSL [closed]

    - by Richard
    I'm struggling to get SSL working on GlassFish 3.1.1. I've been following tutorials like http://javadude.wordpress.com/2010/04/06/getting-started-with-glassfish-v3-and-ssl/ and SO posts like "Issues with setting up SSL on Glassfish v3" (the links are for information only; I've summarised what I've done below). As far as I can tell I'm doing everything correctly, but I'm getting this error:

        SSL configuration is invalid due to No available certificate or key corresponds to the SSL cipher suites which are enabled

    Some background on what I have done: my cert is from GoDaddy. I generated the CSR from a new keystore (keystore.jks), then imported the resulting certs back into the same keystore and set the keystore password to the same password as the GF master password. Then I created a new SSL listener in GF and pointed it at my keystore file (which I copied into domains/domain1/config), and set the Nickname to the alias of my cert (which is something like 'mydomain.org', i.e. the name I get when I run keytool -list). In the ciphers section on the network listeners page, I left the defaults in place (empty, which I believe means all ciphers are available). In domain.xml I've replaced all instances of s1as with 'mydomain.org'. So the question is: what exactly is causing the error highlighted above? I'm guessing it's a mismatch between my listener config and the aliases in my keystore, or something similar, but I'm not really sure what. Thanks
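    That error usually means GlassFish found the keystore but no usable private-key entry under the nickname it was given. Below is a minimal sketch of the keytool round trip, assuming the alias 'mydomain.org' and keystore.jks from above; the certificate file names are placeholders. The key point is that the signed cert must be imported onto the same alias that holds the private key, so that keytool -list reports the entry as a PrivateKeyEntry rather than a trustedCertEntry:

        # generate the key pair and CSR (done once, before ordering the cert)
        keytool -genkeypair -alias mydomain.org -keyalg RSA -keysize 2048 -keystore keystore.jks
        keytool -certreq -alias mydomain.org -keystore keystore.jks -file mydomain.csr
        # import the CA chain first, then the signed cert onto the SAME alias as the key pair
        keytool -importcert -trustcacerts -alias godaddy -keystore keystore.jks -file gd_bundle.crt
        keytool -importcert -alias mydomain.org -keystore keystore.jks -file mydomain.crt
        # verify: the 'mydomain.org' entry should say PrivateKeyEntry
        keytool -list -keystore keystore.jks

    If the entry shows as trustedCertEntry instead, the signed cert was imported under a fresh alias, which leaves the private key without a certificate chain and produces exactly this kind of "no available certificate or key" error.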

    Read the article

  • Unwanted forced authentication after server restart (Win 2k3)

    - by Felthragar
    We're running a Windows Server 2003 R2 Standard 64-bit edition server. On this server we run a file server and allow remote login to our network through VPN. We do not currently use a domain setup; all user accounts are local accounts on the server, and each employee is given a unique account to log in with. The password is a randomly generated 16-character string, which makes it hard to remember, so what we've done is basically have the password stored on the client machine (standard "Remember Me" functionality). This has worked well. However, last night our server automatically restarted after an automatic update. After that, some of our employees, myself included, had to re-authenticate with the server, submitting our credentials again; some others did not have to re-authenticate. Do you have any idea why this is? Is there a setting to prevent it? I've checked the logs, but I couldn't find anything of interest; then again, I'm not really sure what I'm looking for. Thanks in advance; I'll try to answer any additional questions you may have. Edit: when I say "login" or "authenticate", I mean through the standard Windows SMB protocol. Edit 2: OK, new day. Tonight the server restarted again, and the same two clients that had to re-authenticate yesterday had to re-authenticate today as well. The rest did not.

    Read the article

  • Enabling HTTP access on port 80 for CentOS 6.3 from the console

    - by Hugo
    Have a CentOS 6.3 box running on Parallels, and I'm trying to open port 80 so it is accessible from outside. I tried the GUI solution from this post and it works, but I need to get it done from a script. Tried to do this:

        sudo /sbin/iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
        sudo /sbin/iptables-save
        sudo /sbin/service iptables restart

    This creates exactly the same iptables entries as the GUI tool, except it does not work:

        $ telnet xx.xxx.xx.xx 80
        Trying xx.xxx.xx.xx...
        telnet: connect to address xx.xxx.xx.xx: Connection refused
        telnet: Unable to connect to remote host

    UPDATE:

        $ netstat -ntlp
        (No info could be read for "-p": geteuid()=500 but you should be root.)
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
        tcp   0      0      0.0.0.0:3306       0.0.0.0:*          LISTEN   -
        tcp   0      0      127.0.0.1:6379     0.0.0.0:*          LISTEN   -
        tcp   0      0      0.0.0.0:111        0.0.0.0:*          LISTEN   -
        tcp   0      0      0.0.0.0:80         0.0.0.0:*          LISTEN   -
        tcp   0      0      0.0.0.0:22         0.0.0.0:*          LISTEN   -
        tcp   0      0      127.0.0.1:631      0.0.0.0:*          LISTEN   -
        tcp   0      0      127.0.0.1:25       0.0.0.0:*          LISTEN   -
        tcp   0      0      0.0.0.0:37439      0.0.0.0:*          LISTEN   -
        tcp   0      0      :::111             :::*               LISTEN   -
        tcp   0      0      :::22              :::*               LISTEN   -
        tcp   0      0      ::1:631            :::*               LISTEN   -
        tcp   0      0      :::60472           :::*               LISTEN   -

        $ sudo cat /etc/sysconfig/iptables
        # Generated by iptables-save v1.4.7 on Wed Dec 12 18:04:25 2012
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [5:640]
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
        -A FORWARD -j REJECT --reject-with icmp-host-prohibited
        COMMIT
        # Completed on Wed Dec 12 18:04:25 2012
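    For what it's worth, the iptables-save output above suggests an ordering problem rather than a syntax one: -A appends the port-80 ACCEPT after the catch-all REJECT line, and rules match in order, so the REJECT wins first. A sketch of the likely fix (the position 5 is read off the dump above, where the REJECT is the fifth INPUT rule):

        # remove the misplaced rule, then insert it above the REJECT
        sudo /sbin/iptables -D INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
        sudo /sbin/iptables -I INPUT 5 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
        # persist the running rules; iptables-save with no redirect only prints them
        sudo /sbin/service iptables save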

    Read the article

  • Removing HttpModule for specific path in ASP.NET / IIS 7 application?

    - by soccerdad
    Most succinctly, my question is whether an ASP.NET 4.0 app running under IIS 7 integrated mode should be able to honor this portion of my Web.config file:

        <location path="auth/windows">
          <system.webServer>
            <modules>
              <remove name="FormsAuthentication"/>
            </modules>
          </system.webServer>
        </location>

    I'm experimenting with mixed-mode authentication (Windows and Forms). Using IIS Manager, I've disabled Anonymous authentication for auth/windows/winauth.aspx, which is within the location path above. I have Failed Request Tracing set up to trace various HTTP status codes, including 302s. When I request the winauth.aspx page, a 302 HTTP status code is returned. If I look at the request trace, I can see that a 401 (unauthorized) was originally generated by the AnonymousAuthenticationModule; however, the FormsAuthenticationModule converts that to a 302, which is what the browser sees. So it seems as though my attempt to remove that module from the pipeline for pages in that path isn't working, yet I'm not seeing any complaints anywhere (event viewer, yellow pages of death, etc.) that would indicate an invalid configuration. I want the 401 returned to the browser, which presumably would include an appropriate WWW-Authenticate header. A few other points:

        a) I do have <authentication mode="Forms"> in my Web.config, and that is what the 302 redirects to;
        b) I got the "name" of the module I'm trying to remove from the inetsrv\config\applicationHost.config file;
        c) I have this element in my Web.config file: <modules runAllManagedModulesForAllRequests="false">;
        d) I tried a <location> element for the path in which I set the authentication mode to "None", but that gave a yellow exception page saying that the property can't be set below the application level.

    Anyone had any luck removing modules in this fashion?

    Read the article

  • Windows 7 using LLT for IPv6

    - by Seoman
    The question asked below is based on the specific implementations of the OSes, not on the RFC. Looking for a way to assign a fixed IP address to a host before it boots, I found that CentOS 6 works fine with no modifications, while Windows 7 does not work at all. The DHCPv6 specification defines three valid ways to generate a DUID:

        1. link-layer address plus time (DUID-LLT)
        2. vendor-assigned unique ID based on Enterprise Number (DUID-EN)
        3. link-layer address (DUID-LL)

    Looking at the CentOS box, which works fine, I can see the following autogenerated DUID:

        option dhcp6.client-id 0:1:0:1:19:60:25:f1:52:54:0:6b:b9:9e;

    and the MAC address for this host is:

        ifconfig eth1 | grep HWaddr
        eth1      Link encap:Ethernet  HWaddr 52:54:00:6B:B9:9E

    As you can see, the DUID contains the MAC address. I can assign a fixed IP address to this host by including an entry on my DHCP server similar to:

        host vm {
            hardware ethernet 52:54:00:6B:B9:9E;
            fixed-address6 2001:db8:0:1::200;
            if packet(0,1) = 1 {
                log(debug, "VM Request match!");
            }
        }

    And the CentOS 6 machine gets its IP. On the Windows side, I faced a common problem explained in another post: in summary, Windows 7 uses option 2 of the DUID generation, or a variation of it. That post explains how to switch it to LLT (link-layer plus time), but it is not working properly for me. If I modify the DUID to one that looks like the one generated on CentOS (but with the right MAC), it works as expected. Question 1: how can I change the DUID generation for Windows 7 so it is based on the MAC, as CentOS 6 does? Thanks
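    One approach that is sometimes suggested, offered here as an unverified sketch: Windows keeps the client DUID in the registry as a binary value (commonly reported as Dhcpv6DUID under the Tcpip6 parameters key), and you can overwrite it with a hand-built DUID-LL (type 3), which embeds only the hardware type and the MAC and so avoids the timestamp entirely. The value name, and the MAC below (borrowed from the CentOS example; substitute the Windows host's own), are assumptions to verify on the target machine:

        rem 0003 = DUID-LL, 0001 = hardware type (Ethernet), then the MAC address
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters ^
            /v Dhcpv6DUID /t REG_BINARY /d 000300015254006bb99e /f
        rem reboot so the DHCPv6 client picks up the new DUID
        shutdown /r /t 0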

    Read the article

  • How to flash a Dell Precision 390 from Linux (Debian)

    - by malat
    I am trying to update my BIOS:

        $ sudo dmidecode -s bios-version
        2.1.2

    with a newer one: 2.6.0. I went to the page "Dell Precision System BIOS, 2.6.0". After downloading the file WS390-020600.BIN, here is what it states:

        $ ./WS390-020600.BIN --help
        Usage: WS390-020600.BIN [options]
        Options:
            --help      Print this text.
            --version   Print package versions.
        If no options, update the BIOS.

    and

        $ ./WS390-020600.BIN --version
        Dell BIOS Update Installer 1.2
        Copyright 2006 Dell Inc. All Rights Reserved.
        ./WS390-020600.BIN: 60: ./WS390-020600.BIN: ./flash: not found

    Does anyone know where this flash command can be found? Update: it looks like this is a self-extracting archive (it needs bash, per a comment in the header):

        $ head -30 WS390-020600.BIN
        [...]
        Extract() {
            tail -n +`awk '/^__ARC__/ { print NR + 1; exit 0; }' $0` $0 | gzip -cd >$_PRG

    So the flash command should have been auto-generated; however, the above command does not appear to run as the original author intended. I do not see anything wrong with the command, though.
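    The "60: ./flash: not found" message looks like the installer was interpreted by a plain POSIX shell rather than bash (which the header comment says it needs), so a hedged first step is simply to run it under bash explicitly; failing that, the payload can be unpacked by hand with the same pipeline the script's own Extract() function uses (the output file name below is a guess):

        # run the self-extractor under bash explicitly
        sudo bash ./WS390-020600.BIN
        # or unpack the embedded archive manually, mirroring Extract() above
        tail -n +$(awk '/^__ARC__/ { print NR + 1; exit 0; }' WS390-020600.BIN) WS390-020600.BIN \
            | gzip -cd > payload
        file payload    # inspect what came out; the missing 'flash' helper should be inside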

    Read the article

  • IIS7 / ASP.NET 4.0 - 404 errors only for external clients

    - by dmcgiv
    Recently we moved an ASP.NET 3.5 website to 4.0 (integrated mode), and when we deployed to the client's server (Windows Server 2008 Web edition) we noticed that some .aspx pages are serving 404 errors. What is strange is that:

        1) the pages exist;
        2) if you browse from the server itself, the page is served as normal; only external clients get the 404;
        3) it's the default 404 error page, not the one configured in the web.config;
        4) it only happens for some .aspx pages, and I've not been able to establish a link between the pages that are not being served externally.

    We are using a URL-rewriter module, which I first thought might be at fault, but then I realised that only some of the failing pages are being rewritten. I've also tested removing the HTTP module, and the problem still persists. As everything works as expected when logged onto the server, I was thinking it might be some sort of permission issue, but why would it only affect a few pages? I turned on failed request tracing, and the debug files are being generated with the expected 404 error, although at the moment I'm not sure what most of the data means, so I can't decipher what's going on internally. I'd really appreciate some help with this one.

    Read the article

  • Moving Images from Database to File System

    - by msarchet
    So currently in our system we have been storing image files in the database (SQL Express 2005). Unfortunately no one foresaw that this would hit the maximum database size allowed by the SQL Express license. So I have proposed a plan of storing the images in the file system and only indexing where each file is in the database. The plan is to save the root path in our OptionsTable as something like ImagesRoot, and then save only the actual ImageID in the table, which is basically an FK from the PK of the record with the image. I have determined that it would be best to split this down into subdirectories inside ImagesRoot, one per 1000 images, so the path is basically (ImageID / 1000)\(ImageID % 1000) (e.g. if ImageID is 1999, it lives at %ImagesRoot%\1\999). I'm looking for any potential pitfalls of this system and anything that could be improved, as I am already getting some resistance from the owner of the company, who wants everything to be in databases; along those lines, I would also take reasons why it should all be in databases. I should mention we already have automated backups in place that run for all of our customers' databases, as well as for any files generated by our program that are required to be kept over a period of time. These backups are optional, but if someone isn't using them, it is expected that they are using their own, or data loss isn't our problem (it is if our system fails and they are using it!). Thanks

    Read the article

  • Subsequent runs of rsync locally don't reduce data transferred

    - by sharakan
    I have an EC2 instance with data I want to sync to a mounted, but remote, volume as a backup. rsync seems like the way to go, so as a test I took my test file (a Postgres pg_dump file) and used rsync -v to copy it to the mounted volume:

        [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
        dump.sql.1
        sent 821704315 bytes  received 31 bytes  3416650.09 bytes/sec
        total size is 821603948  speedup is 1.00

    Then I ran it again, expecting to see minimal sent/received numbers because it would just be checksums. Instead...

        [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
        dump.sql.1
        sent 821704315 bytes  received 31 bytes  3402502.47 bytes/sec
        total size is 821603948  speedup is 1.00

    I'm new to rsync, so perhaps I'm missing something, but isn't the idea that the source and destination files are checked for differences, and then a patch is generated and applied to the destination? Why is this not reducing the amount of data 'sent' to just the size of the checksums? Some background, if it's relevant: the mounted volume is using s3fs, mounted with s3fs <bucketname> backup.
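    For context, rsync disables its delta-transfer algorithm when both paths look local (and an s3fs mount does look local), because --whole-file is normally faster when there is no network hop between the two ends; that matches the numbers above. A sketch of how one might force the delta algorithm instead:

        # force delta-transfer even though both paths appear local
        rsync -v --no-whole-file dump.sql.1 ../backup/dump.sql

    Note that with s3fs underneath, the delta algorithm still has to read the whole destination file back from S3 to compute its checksums, so it may not save as much as expected.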

    Read the article

  • No outbound internet connection after restarting CentOS 6.3

    - by wnstnsmth
    After restarting a headless CentOS 6.3 machine, it lost outbound internet connectivity. I can still connect to the server via SSH (ssh root@**.126.18.56), but commands such as ping google.com give google.com: unknown host, and yum list some_package gives a lot of network errors. This is what ifconfig gives:

        eth0      Link encap:Ethernet  HWaddr 00:25:90:78:2D:5D
                  inet addr:**.126.18.56  Bcast:**.126.18.255  Mask:255.255.255.0
                  inet6 addr: fe80::225:90ff:fe78:2d5d/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:75594 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:787 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:7074741 (6.7 MiB)  TX bytes:144391 (141.0 KiB)
                  Interrupt:20 Memory:f7a00000-f7a20000

        eth1      Link encap:Ethernet  HWaddr 00:25:90:78:2D:5C
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
                  Interrupt:16 Memory:f7900000-f7920000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:6 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:504 (504.0 b)  TX bytes:504 (504.0 b)

    I have absolutely no clue how to debug this, and I find it very strange since I can still connect via SSH. EDIT: weirdly, /etc/resolv.conf does not contain any entries, or none that I can make sense of:

        # Generated by NetworkManager
        search sui-inter.net
        # No nameservers found; try putting DNS servers into your
        # ifcfg files in /etc/sysconfig/network-scripts like so:
        #
        # DNS1=xxx.xxx.xxx.xxx
        # DNS2=xxx.xxx.xxx.xxx
        # DOMAIN=lab.foo.com bar.foo.com

    So is it possible that rebooting the server erased that file? It worked before, at least! And how do I solve this? By the way, pinging an IP address works.
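    The resolv.conf comment block is itself the hint: NetworkManager rewrote the file at boot and found no DNS servers configured for the interface, which fits "ping by IP works, ping by name doesn't". A hedged sketch of a fix; the device name eth0 and the Google resolvers are assumptions:

        # immediate, non-persistent fix
        echo "nameserver 8.8.8.8" >> /etc/resolv.conf
        # persistent fix, so the file is regenerated correctly on the next reboot
        echo "DNS1=8.8.8.8" >> /etc/sysconfig/network-scripts/ifcfg-eth0
        echo "DNS2=8.8.4.4" >> /etc/sysconfig/network-scripts/ifcfg-eth0
        service network restart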

    Read the article

  • Our client's site is redirecting to a scammy pill site [closed]

    - by Alex Demchak
    Possible duplicate: My server's been hacked EMERGENCY. We usually host our clients' sites, but we aren't hosting this one. The website itself (weddle-funeral.com) works just fine, but if you load Google, search for "weddle funeral stayton oregon", and click that link, the site redirects to a scammy pill site. I went through the site, and there were some PHP files in the WordPress plugins that got quarantined by my antivirus. I removed ALL non-essential files and uploaded fresh versions of all the plugins, but it's STILL redirecting from Google. I tried logging in to the cPanel (on a virtual private server), and cPanel flashed a red warning screen:

        The site's security certificate is not trusted! You attempted to reach XXXXX.com, but the server
        presented a certificate issued by an entity that is not trusted by your computer's operating
        system. This may mean that the server has generated its own security credentials, which Google
        Chrome cannot rely on for identity information, or an attacker may be trying to intercept your
        communications. You should not proceed, especially if you have never seen this warning before
        for this site.

    (Keep in mind, that's for the hosting account's cPanel.) Is there something in the SERVER, perhaps, that's causing the redirect? EDIT: .htaccess file contents:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress
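    Since the .htaccess above is clean and the redirect only fires for visitors arriving from Google, the conditional redirect is more likely injected into PHP; such cloaking code typically keys off HTTP_REFERER or the user agent. A hedged starting point for locating it (directory names follow the standard WordPress layout; "wordpress-clean" is a fresh download of the same version):

        # look for the usual obfuscation patterns in the WordPress tree
        grep -rn --include='*.php' -e 'base64_decode' -e 'eval(' -e 'HTTP_REFERER' \
            wp-content/ wp-includes/ wp-admin/ index.php wp-config.php
        # diff the core against a clean copy of the same WordPress version
        diff -rq wordpress-clean/ site/ | grep -v wp-content/uploads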

    Read the article

  • Linux: How do I use Munin in cPanel to monitor MySQL?

    - by Continuation
    I have a cPanel server running CentOS 5.5, and I want to use Munin to monitor MySQL. I went to Main >> cPanel >> Manage Plugins, selected "Install and keep updated" for Munin, and clicked "Save". I got the usual bunch of status updates about the install. At the end I got:

        Going to read '/home/.cpan/sources/modules/02packages.details.txt.gz'
        Database was generated on Wed, 02 Mar 2011 18:28:33 GMT
        ..........................................................................DONE
        Going to read '/home/.cpan/sources/modules/03modlist.data.gz'
        Out of memory!
        Callback called exit.
        Done
        Done
        Done
        Process Complete

    As you can see, I got an "Out of memory!" message, but after that it said "Process Complete". Was Munin installed? When I went back to "Manage Plugins", Munin had a check against "Install and keep updated". So is everything all right? And how do I use Munin now? How do I configure it to monitor MySQL? Where can I see the results? Thanks.

    Read the article

  • Strange Behaviour with Unicode Characters in Windows

    - by open_sourse
    OK, I do not know if this is a programming question, but it certainly is a technical one, so I am asking it here. I was working on some internationalization stuff in my PHP code, and in order to ensure that my generated HTML shows Unicode correctly based on the encoding, I decided to add some Chinese text to my PHP page, which then echoes it to the browser to complete my test case. So I went to Google, typed "Chinese", and copied the first Chinese text that the search returned (which was ??/??). I then pasted it into Notepad++, which is my editor, and to my surprise it showed up as boxes, similar to [][]/[][]. So I thought the encoding in Notepad++ was messed up, and I changed the encoding to UTF-8 and UCS; neither worked. I tried it fresh in a newly encoded file, and still I got the boxes. The same content, when pasted into Google or Stack Overflow (as I did in this posting), shows up as correct Chinese! I even opened the Windows Clipboard Viewer, and the content is represented in the clipboard as boxes! I tried pasting it into the Windows Explorer address bar and using it to rename a file, but I still get boxes. Yet it shows up correctly when pasted into my Chrome browser's address bar. Is this a Windows issue? Since I am able to paste it correctly into SO, the data in memory should be encoded correctly, right? But if that is the case, why does it show up as boxes in the Clipboard Viewer? I am confused here... By the way, I am using Windows XP with SP3. (I am asking this question here even if it is not strictly programming-related, because it is preventing me from running my programming test cases.)

    Read the article

  • Reverse web proxy with time constraints

    - by user2893458
    I have a web application which produces several unique URLs of the type http://service.company.com/service.html?type=aaaa&key=jfiZm6u6cW, where the last part is a randomly generated key. Each such URL provides access to an instance of the service provided. I am looking for a way to restrict access to those URLs based on time constraints; as an example, URL#1 should be available between 8:00AM and 10:00AM on May 30, URL#2 between 10:30AM and 12:00PM on May 31, and so on. I already have a resource-scheduling application based on Drupal and would like to find a way to include those URLs as scheduled resources. The web application is deployed on Apache Tomcat, and I don't have the knowledge or the resources to alter it; therefore I thought I could put some sort of reverse proxy in front of the web app to implement the time-constraint feature. In my thinking, the reverse proxy would allow or deny access to each URL based on the rules that my scheduling application provides. There may be other ways to deliver such a solution, but I can't think of anything better, so the question is: is there a reverse web proxy architecture that could allow access to the destination URLs based on time and date rules? Any other ideas are more than welcome.
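    One sketch of this, assuming Apache httpd in front of Tomcat with mod_proxy and mod_rewrite: RewriteCond can compare %{TIME} (formatted YYYYMMDDHHMMSS) lexicographically, so the scheduling application could regenerate one block like the following per scheduled key and reload the proxy. The host name, port, and times below are placeholders:

        ProxyPass /service.html http://tomcat-host:8080/service.html
        RewriteEngine On
        # URL#1: forbid key jfiZm6u6cW outside 08:00-10:00 on May 30
        RewriteCond %{QUERY_STRING} key=jfiZm6u6cW
        RewriteCond %{TIME} <20130530080000 [OR]
        RewriteCond %{TIME} >20130530100000
        RewriteRule ^/service\.html$ - [F]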

    Read the article

  • Windows Server 2008 R2 creating a multi-year client certificate using the IIS certsrv page while deploying SSTP VPN

    - by Warren P
    I am trying to follow instructions on TechNet about deploying a standard (non-enterprise) SSTP-based VPN that were originally written for Server 2008, but I am using Server 2008 R2. I have gotten as far as the part where it asks you to request a Server Authentication certificate. I have deployed IIS and Active Directory Certificate Services, and chose a "Standalone", "Standard" (non-enterprise) Certificate Authority, because I don't have an OID and don't think I should have to get one for a simple deployment of SSTP. The certificates produced by the Certification Authority's "Issue" command only have a one-year period of validity, and I want a multi-year certificate. At no point in this process is there any way to input this information, unless it's through the Attributes text input area on the Advanced Certificate Request page, which appears to be generated using an old ActiveX control, meaning I can only do this using the workarounds in the article linked at the top, and only using Internet Explorer. Update: this question may be moot, since self-signed certificates do not appear to work when I try them with Windows 8 as the VPN client. The problem is that certificates self-created by the technique shown here do not have any certificate revocation server URLs, so I get the error "The revocation function was unable to check revocation", and the VPN connection fails.
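    For the validity period itself: on a standalone CA that setting does not come from the request at all, it is CA-wide registry configuration, adjustable with certutil (the five-year value below is just an example):

        certutil -setreg ca\ValidityPeriod "Years"
        certutil -setreg ca\ValidityPeriodUnits 5
        net stop certsvc
        net start certsvc

    Certificates issued after the service restart pick up the new maximum; it does not extend certificates that were already issued.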

    Read the article

  • Red Hat Linux password fails on ssh

    - by Stephopolis
    I am trying to ssh into my Linux machine from my Mac. If I am physically at the machine, I can log in with my password just fine, but over ssh it refuses me:

        Permission denied (publickey,keyboard-interactive)

    I thought this might be caused by some changes I recently made to system-auth, but I restored everything to what I believe was the original format:

        #%PAM-1.0
        # This file is auto-generated.
        # User changes will be destroyed the next time authconfig is run.
        auth        required      pam_env.so
        auth        sufficient    pam_fprintd.so
        auth        sufficient    pam_unix.so nullok try_first_pass
        auth        requisite     pam_succeed_if.so uid >= 500 quiet
        auth        required      pam_deny.so

        account     required      pam_unix.so
        account     sufficient    pam_localuser.so
        account     sufficient    pam_succeed_if.so uid < 500 quiet
        account     required      pam_permit.so

        password    requisite     pam_cracklib.so try_first_pass retry=3
        password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
        password    required      pam_deny.so

        session     optional      pam_keyinit.so revoke
        session     required      pam_limits.so
        session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
        session     required      pam_unix.so

    But I still could not ssh in. I tried removing my password altogether, and that didn't seem to help either: it still asks, and even entering an empty string (nothing) fails me out. Any advice?
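    The "(publickey,keyboard-interactive)" list in the error comes from the server and shows which authentication methods sshd is offering, so it is worth checking sshd_config as well as PAM. A hedged sketch:

        # see what sshd currently allows
        grep -Ei 'passwordauthentication|challengeresponse|usepam' /etc/ssh/sshd_config
        # if password logins are wanted, these settings (edited by hand) are the usual fix:
        #   PasswordAuthentication yes
        #   ChallengeResponseAuthentication yes
        service sshd restart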

    Read the article

  • Exchange 2003 inbound routing issue

    - by user565712
    Just recently we started experiencing inbound routing issues. Email addressed to user@domain.com is intermittently translated to user@www.domain.com. This is happening for several users and, as stated, is intermittent. I don't know where to start looking for the solution. Is this an Exchange issue? A DNS issue? We have a single Exchange server inside our network, with an FQDN of server.domain.local and a single SMTP virtual server. In the Advanced properties of the Delivery tab of the virtual server, the Masquerade Domain textbox is empty and the FQDN textbox is set to the domain itself, domain.com. The DNS record for domain.com is a CNAME entry referencing www.domain.com. Is this somehow related to the problem? I checked the headers of the inbound messages that generated NDRs as a result of being sent to user@www.domain.com, and nowhere in the headers is www.domain.com mentioned. To make my life even more difficult, we use Postini as a third-party spam filtering service: our MX records point to the Postini servers, and Postini delivers the messages to our server. Perhaps it is Postini that is mucking things up? (Sigh.) I'm having trouble with this one, and the intermittent aspect is making it that much more difficult. Any ideas?

    Read the article

  • Paranormal activity in My Pictures folder: Thumbnail doesn't match actual picture.

    - by Sam152
    After finding an amusing picture on a popular imageboard, I decided to save it. A few days passed, and I was browsing my images folder when I realised that the thumbnail generated by Windows XP in the Thumbnails view did not match the actual image. Here is a comparison image. What's even stranger is that the parts of the photograph that differ have actually been replaced with what might be the correct background. Furthermore, it is a JPEG (no PNG transparency tricks) that is 343 kilobytes but only 847x847 pixels. What could be going on here? Could there be anything malicious at work, or hidden data? Before anyone asks, I have checked and performed the following:

        Deleted Thumbs.db to reload thumbnails.
        Opened the image in different editors (they all show the version with the text).
        Moved the image to a different directory.
        Changed the extension to .rar.

    All these steps produce the same results. Pre-posting update: it seems that opening the image in Paint and changing it entirely (deleting the entire contents and making a red fill) will still generate the original thumbnail, even after deleting Thumbs.db, etc. I'm also hesitant to post the original data, in case there is something malicious or hidden that could be potentially illegal (although it would be very beneficial to see if it works on other computers and not just my own).
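    One explanation consistent with the symptoms is an embedded EXIF thumbnail: a JPEG can carry its own small preview image, Windows XP's Explorer will use that preview instead of rendering the picture itself, and a second full image hidden in the metadata would also account for the unusually large file size. A hedged way to check, using exiftool (the file name is a placeholder):

        # extract the embedded preview, if any, and compare it with the Explorer thumbnail
        exiftool -b -ThumbnailImage suspect.jpg > embedded-thumb.jpg
        # strip it, delete Thumbs.db again, and see whether the thumbnail changes
        exiftool -ThumbnailImage= suspect.jpg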

    Read the article

  • Let varnish send old data from cache while it's fetching a new one?

    - by mark
    I'm caching dynamically generated pages (PHP-FPM, NGINX) and have varnish in front of them, this works very well. However, once the cache timeout is reached, I see this: new client requests page varnish recognizes the cache timeout client waits varnish fetches new page from backend varnish delivers new page to the client (and has page cached, too, for the next request which gets it instantly) What I would like to do is: client requests page varnish recognizes the timeout varnish delivers old page to the client varnish fetches new page from backend and puts it into the cache In my case it's not site where outdated information is such a big problem, especially not when we're talking about cache timeout from a few minutes. However, I don't want punish user to wait in line and rather deliver something immediate. Is that possible in some way? To illustrate, here's a sample output of running siege 5 minutes against my server which was configured to cache for one minute: HTTP/1.1,200, 1.97, 12710,/,1,2013-06-24 00:21:06 ... HTTP/1.1,200, 1.88, 12710,/,1,2013-06-24 00:21:20 ... HTTP/1.1,200, 1.93, 12710,/,1,2013-06-24 00:22:08 ... HTTP/1.1,200, 1.89, 12710,/,1,2013-06-24 00:22:22 ... HTTP/1.1,200, 1.94, 12710,/,1,2013-06-24 00:23:10 ... HTTP/1.1,200, 1.91, 12709,/,1,2013-06-24 00:23:23 ... HTTP/1.1,200, 1.93, 12710,/,1,2013-06-24 00:24:12 ... I left out the hundreds of requests running in 0.02 or so. But it still concerns me that there are going to be users having to wait almost 2 seconds for their raw HTML. Can't we do any better here? (I came across Varnish send while cache , it sounded similar but not exactly what I'm trying to do.)

    Read the article

  • Urgent: how to deny read access to an ExecCGI directory

    - by Malvolio
    First, I can't believe that that isn't the default behavior. Second, yikes! I don't know how long my code has been hanging out there, with all sorts of cool secret stuff, just waiting for some hacker who knows Apache better than I do. EDIT (and apology): well, this is sort of embarrassing. Here's what happened. We had some Python scripts available to the web at /aux/file.py, which lived, not surprisingly, at /var/www/http/aux. Separately, we were running an app server, with Apache proxying through at /servlets/. A contractor had constructed the WAR file by bundling up all the generated files, including the Python files (which sit in a directory also called aux, not surprisingly), so if you typed in /servlets/aux/file.py, the web server would ask the app server for it, and the app server would just supply the file. It was the latter URL that I happened to type in by accident this morning, and lo, the source appeared. Until I realized the sheer unlikelihood of what I had done, the situation was rating about 8.3 on the sphincter scale. After a tense half-hour or so, I realized that it had nothing to do with CGI (and that serving files that were also executable would be not only foolish but also impossible), and I was able to address the real problems. So, sorry, everybody. Let the scorn-fest commence.

    Read the article

  • Dump Trac DB on Windows/XAMPP

    - by Whiteknight
    I have a Trac instance running on a Windows XP machine with XAMPP, and I am trying to migrate it to a newer Linux-based machine. However, I'm having a hard time getting the database to cooperate. I try to dump the db with this command:

        sqlite3 C:\tracroot\db\trac.db ".dump" >> mysqldump.sql

    But the generated file is mostly empty:

        BEGIN TRANSACTION;
        COMMIT;

    So that's not right. For the record, my Trac instance is running now and appears to have full access to all the contents of the DB. But sqlite3 (located in C:\xampp\apache\bin) can't seem to get any information from the file. The DB file itself has the header "SQLite format 3", so that seems to be correct. I need to know one of two things: (1) how to get this dump working, OR (2) an alternate way to migrate the Trac database to the new machine. Update: when I try to open the .db file in sqlite3, I get the error "Error: unsupported file format". What format is it in, and why is it unsupported?
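    Two hedged routes worth trying. First, "unsupported file format" is what an older sqlite3 binary reports when the database uses a schema format newer than it understands, so a current sqlite3.exe downloaded from sqlite.org may dump the file fine where the copy bundled with XAMPP fails. Second, Trac ships its own backup command, which copies the whole environment (database included) in a consistent state; the paths below are placeholders:

        rem let Trac take a consistent copy of the whole environment
        trac-admin C:\tracroot hotcopy C:\backup\tracroot
        rem then move C:\backup\tracroot to the Linux machine and point Trac at it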

    Read the article

  • My server can't resolve domains?

    - by Nuker
    I am on a VPS that is pretty much unmanaged, so I'm on my own. I did my best to configure it so I can host my own site, but it seems I have network problems: in the last few days many of my users report they can't reach my site via my domain, and it seems Google and Facebook can't either (this never happened before). It's weird, because I can reach my site without problems, and so can many other people. But then I tried to make a PHP include, and I get this error:

        Warning: include(): php_network_getaddresses: getaddrinfo failed: Name or service not known in

    I was told this suggests my server can't resolve domains. The includes work if I use IPs instead of domains. So does that mean I have a DNS problem or something? What can I do to fix it? I'm on Linux 2.6.32-431.11.2.el6.x86_64, CentOS 6.5. Thank you. EDIT: I have this in my resolv.conf:

        # Generated by NetworkManager
        # No nameservers found; try putting DNS servers into your
        # ifcfg files in /etc/sysconfig/network-scripts like so:
        #
        # DNS1=xxx.xxx.xxx.xxx
        # DNS2=xxx.xxx.xxx.xxx
        # DOMAIN=lab.foo.com bar.foo.com
        nameserver 8.8.8.8
        nameserver 8.8.4.4
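    A quick way to narrow this down, since the resolv.conf above already points at Google's public resolvers, is to query them directly. If the direct query times out while connecting to IPs works, outbound port 53 is being blocked (by iptables or the provider) rather than anything being wrong with the resolver configuration itself:

        # ask 8.8.8.8 directly, bypassing the local resolver configuration
        dig @8.8.8.8 google.com +short
        # compare with what the system resolver returns
        getent hosts google.com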

    Read the article

  • How many guesses per second are possible against an encrypted disk? [closed]

    - by HappyDeveloper
    I understand that guesses per second depend on the hardware and the encryption algorithm, so I don't expect an absolute number as an answer. For example, with an average machine you can make a lot (thousands?) of guesses per second against a hash created with a single MD5 round, because MD5 is fast, making brute-force and dictionary attacks a real danger for most passwords. But if instead you use bcrypt with enough rounds, you can slow the attack down to 1 guess per second, for example.

    1) So how does disk encryption usually work? This is how I imagine it; tell me if it is close to reality: when I enter the passphrase, it is hashed with a slow algorithm to generate a key (always the same?). Because this is slow, brute force is not a good approach to breaking it. Then, with the generated key, the disk is decrypted on the fly very fast, so there is no significant performance loss.

    2) How can I test this with my own machine? I want to calculate the guesses per second my machine can make.

    3) How many guesses per second are possible against an encrypted disk with the fastest PC available so far?
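    For question 2, LUKS-based disk encryption ships a benchmark for exactly this: it measures how many PBKDF2 iterations per second the machine manages, which is the cost of a single passphrase guess (LUKS then picks an iteration count so that one guess costs a fixed wall-clock time, on the order of a second, on the machine where the volume was created). A minimal check:

        # prints PBKDF2 iterations/second per hash, plus raw cipher throughput
        cryptsetup benchmark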

    Read the article
