Search Results

Search found 26427 results on 1058 pages for 'google scripts'.

Page 443/1058

  • sendmail is using return-path instead of from address

    - by magd1
    I have a customer that is complaining about emails marked as spam. I'm looking at the header. It shows the correct From: [email protected] However, it doesn't like the return-path. Return-Path: <[email protected]> Received-SPF: neutral (google.com: x.x.x.x is neither permitted nor denied by domain of [email protected]) client-ip=x.x.x.x; Authentication-Results: mx.google.com; spf=neutral (google.com: x.x.x.x is neither permitted nor denied by domain of [email protected]) [email protected] How do I configure sendmail to use the From address for the Return-Path?
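
    A common fix (a sketch of my own, not from the post, using the hypothetical domain example.com in place of the redacted one) is to masquerade the envelope sender in sendmail.mc so the Return-Path carries the same domain as the From header, then rebuild and restart:

        dnl rewrite the envelope sender (Return-Path) as well as the header sender
        MASQUERADE_AS(`example.com')dnl
        FEATURE(`masquerade_envelope')dnl

        # rebuild sendmail.cf and restart sendmail
        m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf && service sendmail restart

    Publishing an SPF record for that domain that lists the sending IP should then turn the spf=neutral result into a pass.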

    Read the article

  • Recovering a lost website with no backup?

    - by Jeff Atwood
    Unfortunately, our hosting provider experienced 100% data loss, so I've lost all content for two hosted blog websites: http://blog.stackoverflow.com http://www.codinghorror.com (Yes, yes, I absolutely should have done complete offsite backups. Unfortunately, all my backups were on the server itself. So save the lecture; you're 100% absolutely right, but that doesn't help me at the moment. Let's stay focused on the question here!) I am beginning the slow, painful process of recovering the website from web crawler caches. There are a few automated tools for recovering a website from internet web spider (Yahoo, Bing, Google, etc.) caches, like Warrick, but I had some bad results using this: My IP address was quickly banned from Google for using it I get lots of 500 and 503 errors and "waiting 5 minutes…" Ultimately, I can recover the text content faster by hand I've had much better luck by using a list of all blog posts, clicking through to the Google cache and saving each individual file as HTML. While there are a lot of blog posts, there aren't that many, and I figure I deserve some self-flagellation for not having a better backup strategy. Anyway, the important thing is that I've had good luck getting the blog post text this way, and I am definitely able to get the text of the web pages out of the Internet caches. Based on what I've done so far, I am confident I can recover all the lost blog post text and comments. However, the images that go with each blog post are proving…more difficult. Any general tips for recovering website pages from Internet caches, and in particular, places to recover archived images from website pages? (And, again, please, no backup lectures. You're totally, completely, utterly right! But being right isn't solving my immediate problem… Unless you have a time machine…)
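
    For the page-by-page fetching, here is a minimal sketch (mine, not from the post) that pulls Google's cached copies slowly enough to avoid the rate-limit ban that Warrick triggered, assuming a hand-built urls.txt with one post URL per line and the cache URL format Google used at the time:

        # fetch each cached page, pausing between requests to stay under rate limits
        while read url; do
            wget --user-agent="Mozilla/5.0" \
                 -O "cache_$(echo "$url" | md5sum | cut -d' ' -f1).html" \
                 "http://webcache.googleusercontent.com/search?q=cache:$url"
            sleep $((RANDOM % 60 + 30))
        done < urls.txt

    Images are a different story: the search-engine caches generally keep only the HTML, so the Wayback Machine at archive.org is usually the better source for the image files themselves.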

    Read the article

  • How should I setup separate mx records for a subdomain?

    - by Chris Adams
    Let's say I have a domain that I run a web app on, for example cranketywidgets.com, and I'm using Google Apps to handle email for people who work on that domain, i.e. [email protected], [email protected], [email protected] and so on. Google's own mail services aren't always the best for sending automated reminder emails, comment notifications and so on, so the current solution I plan to pursue is to create a separate subdomain called mailer.cranketywidgets.com, run a mail server off it, and create a few accounts specifically for sending these kinds of emails. What should the MX records and A records look like here? I'm somewhat confused by the fact that MX records can be names, but that they must eventually resolve to an A record. What should the records look like here? cranketywidgets.com - A record to the actual server like 10.24.233.214 cranketywidgets.com - MX records for Google's email apps mailer.cranketywidgets.com - MX name pointing to the server's IP address Would greatly appreciate some help on this - the answer seems like it'll be obvious, but email spam is a difficult problem to solve.
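
    Roughly, the zone could look like the following BIND-style sketch (the mail-server address 10.24.233.215 is invented for illustration, and Google's current MX hosts should be copied from their own documentation):

        cranketywidgets.com.          IN  A      10.24.233.214
        cranketywidgets.com.          IN  MX 1   aspmx.l.google.com.
        cranketywidgets.com.          IN  MX 5   alt1.aspmx.l.google.com.
        mailer.cranketywidgets.com.   IN  A      10.24.233.215
        mailer.cranketywidgets.com.   IN  MX 10  mailer.cranketywidgets.com.

    The key point is that an MX target must always be a hostname that resolves to an A record - never a raw IP address and never a CNAME - which is why the mailer subdomain needs its own A record before it can be named in an MX record.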

    Read the article

  • SQL Constraints – CHECK and NOCHECK

    - by David Turner
    One performance issue I faced at a recent project was with the way that our constraints were being managed. We were using SubSonic as our ORM, and it has a useful tool for generating your ORM code called SubStage – once configured, you can regenerate your DAL code easily based on your database schema, and it can even be integrated into your build as a pre-build event if you want to do this. SubStage also offers the useful feature of being able to generate DDL scripts for your entire database, and can script your data for you too. The problem came when we decided to use the generate scripts feature to migrate the database onto a test database instance – it turns out that the DDL scripts that it generates include the WITH NOCHECK option, so when we executed them on the test instance and performed some testing, we found that performance wasn't as expected.

    A constraint can be disabled, enabled but not trusted, or enabled and trusted. When it is disabled, data can be inserted that violates the constraint because it is not being enforced; this is useful for bulk load scenarios where performance is important. So what does it mean to say that a constraint is trusted or not trusted? This refers to the SQL Server Query Optimizer, and whether it trusts that the constraint is valid. If it trusts the constraint then it doesn't check that it is valid when executing a query, so the query can be executed much faster.

    Here is an example based on this article on TechNet. We create two tables with a foreign key constraint between them, add a single row to each, and then query the tables:

        DROP TABLE t2
        DROP TABLE t1
        GO

        CREATE TABLE t1(col1 int NOT NULL PRIMARY KEY)
        CREATE TABLE t2(col1 int NOT NULL)

        ALTER TABLE t2 WITH CHECK ADD CONSTRAINT fk_t2_t1 FOREIGN KEY(col1)
        REFERENCES t1(col1)

        INSERT INTO t1 VALUES(1)
        INSERT INTO t2 VALUES(1)
        GO

        SELECT COUNT(*) FROM t2
        WHERE EXISTS
            (SELECT *
             FROM t1
             WHERE t1.col1 = t2.col1)

    This all works fine, and in this scenario the constraint is enabled and trusted. We can verify this by querying the 'is_disabled' and 'is_not_trusted' properties:

        select name, is_disabled, is_not_trusted from sys.foreign_keys

    For fk_t2_t1 both flags come back as 0. We can disable the constraint using this SQL:

        alter table t2 NOCHECK CONSTRAINT fk_t2_t1

    When we query the constraints again, we see that the constraint is disabled and not trusted. So the constraint won't be enforced and we can insert data into table t2 that doesn't match the data in t1, but we don't want to do this, so we can enable the constraint again using this SQL:

        alter table t2 CHECK CONSTRAINT fk_t2_t1

    But when we query the constraints again, we see that the constraint is enabled, yet it is still not trusted. This means that the optimizer will not make use of the constraint when executing queries over it, which will impact the performance of those queries, and this is definitely not what we want, so we need to make the constraint trusted by the optimizer again.

    First we should check that our constraints haven't been violated, which we can do by running DBCC:

        DBCC CHECKCONSTRAINTS (t2)

    Hopefully DBCC completes without finding any violations of your constraint. Having verified that the constraint was not violated while it was disabled, we can simply execute the following SQL:

        alter table t2 WITH CHECK CHECK CONSTRAINT fk_t2_t1

    At first glance this looks like it must be a typo to have the keyword CHECK repeated twice in succession, but it is the correct syntax, and when we query the constraint's properties we find that it is now trusted again. To fix our specific problem, we created a script that checked all constraints on our tables, using the following syntax:

        ALTER TABLE t2 WITH CHECK CHECK CONSTRAINT ALL
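
    A sketch of what such a script can look like (my own illustration, not taken from the article; it leans on the undocumented but widely used sp_MSforeachtable procedure):

        -- re-validate and re-trust every constraint on every user table
        EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL'

        -- confirm nothing is left untrusted afterwards
        select name, is_not_trusted from sys.foreign_keys where is_not_trusted = 1
        select name, is_not_trusted from sys.check_constraints where is_not_trusted = 1

    Any row returned by the two follow-up queries points at a constraint that still needs attention (usually because the data really does violate it).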

    Read the article

  • Icinga notifications are being marked as spam when sent to my mailbox

    - by user784637
    I'm using gmail and my domain is foo.com About half the notifications from my icinga server, [email protected] go to my spam folder for [email protected] Received-SPF: fail (google.com: domain of [email protected] does not designate <ip6> as permitted sender) client-ip=<ip6>; Authentication-Results: mx.google.com; spf=hardfail (google.com: domain of [email protected] does not designate <ip6> as permitted sender) [email protected] Is my current SPF record set up to allow my icinga server with the ip <ip4> and <ip6> to send email from the domain foo.com? ;; ANSWER SECTION: foo.com. 300 IN TXT "v=spf1 ip4:<ip4> ip6:<ip6> -all"
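
    The hardfail in the headers means the client-ip Google saw is not covered by the ip4:/ip6: mechanisms in the published record. A quick hedged check (standard tools, nothing specific to this setup) is to compare what DNS actually serves with the addresses the Icinga box sends from:

        dig +short TXT foo.com             # the SPF record the world sees
        ip addr show | grep -E 'inet6? '   # the addresses the icinga server can send from

    If the <ip6> address in the bounce differs from the one in the TXT record, or the server also sends from an address that is not listed, the record needs to include that address too.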

    Read the article

  • Why do some people hate Dart? [closed]

    - by Hassan
    First, I'd like to note that this question is not intended to compare two languages or technologies, but is only asking about criticisms aimed at a language. I've always thought it a good idea to somehow get rid of Javascript. It works, but it's just so messy. I think many will agree with me there. And that's how I interpreted Google's release of Dart. It seems to me like a very good alternative to Javascript. Now, it looks like some are not very happy that Google has released this new language. Take a look at this Wikipedia page to see what I'm talking about. If you don't feel like reading it, I'll tell you now that some seem to think that Dart is similar to Microsoft's VBScript, in that it only works on Microsoft's browsers. This goes against the web's openness. But it's my understanding that Dart can be compiled to Javascript, which will allow it to be run on any modern browser (as the Wikipedia article also states). So my question is: are these criticisms valid? Is there a real fear that Google is trying to control the web's front-end to be more compatible with its browser?

    Read the article

  • Administer, manage, monitor, and fine tune the performance of your Oracle SOA Suite 11g Service Infrastructure and SOA composite applications.

    - by JuergenKress
    Key Features of the book: If you are an Oracle SOA Suite administrator, then this book is your bible. It gives you everything you need to know about all your tasks and helps you apply what you learn in your everyday work right from the first chapter. The book walks through promoting code across environments, performance tuning the service infrastructure, monitoring the environment, configuring security policies, managing the dehydration store, backing up and restoring environments, and so on. Packed with real-world examples from the authors' own experiences, this book offers a unique insight into Oracle SOA Suite administration.

    Detailed description: The book begins with an introduction to SOA and quickly moves on to management of SOA composite applications. Readers will learn how to manage composite applications, their deployments and lifecycles. Equipped with this knowledge, readers will be introduced to monitoring and performance tuning SOA Suite, monitoring instances, messages, and composite applications, managing faults and exceptions, configuring audit levels of composite applications to include end-to-end monitoring through the use of extended logging, as well as administering and configuring all SOA Suite components. A very important aspect of administration is tuning and optimizing the infrastructure for performance, and the book offers real-world recommendations to monitor and performance tune service engines, the underlying WebLogic server, threads and timeouts, file systems, and composite applications. It also covers detailed administration of individual service components, configuring the infrastructure MBeans using both Oracle Enterprise Manager Fusion Middleware Control and WLST-based scripts, migrating worklist preferences and BAM data across environments, and setting up Email, LDAP and custom XPath.

    An administrator is always entrusted with troubleshooting and root-causing problems in the infrastructure, and this book will help you through the troubleshooting approaches, such as how to identify faults and exceptions through extended logging and thread dumps, and how to find solutions to common startup problems and deployment issues. The advanced contents of this book explain the OWSM security framework and how to secure components deployed to the infrastructure, along with the details of all the groundwork needed to ready the environment. The last few chapters help you understand and deal with managing the metadata services repository and dehydration store, and backup and recovery, concluding with advanced topics such as silent/scripted installations, cloning, upgrading, patching and high availability installations. Packed with real-world examples and tips straight from the trenches, this book offers insights into SOA Suite administration that you will not find elsewhere. Part of our writing style in this book draws heavily on the philosophy of reuse, and as such the book provides ample executable SQL queries and WLST scripts that administrators can reuse and extend to perform most administration tasks, such as monitoring instances, processing times and instance states, and performing automatic deployments, tuning, migration, and installation. These scripts are spread over each of the chapters in the book and can also be downloaded from here. The book is available in different formats at the following websites: paperback and eBook versions & Kindle version. It is available for order, and signed copies are available through our web site.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: SOA book, SOA Suite Administration, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • How to change HTTP_REFERER using perl?

    - by zuqqhi2
    I tried changing the log format and setting HTTP_REFERER using Perl to change the browser's referrer, as below. [pattern1] Log Format : %{HTTP_REFERER}o perl : $ENV{'HTTP_REFERER'} = "http://www.google.com"; [pattern2] Log Format : %{X-RT-REF}o perl : addHeader('X-RT-REF' => "http://www.google.com"); [pattern3] Log Format : %{HTTP_REFERER}e perl : $ENV{'HTTP_REFERER'} = "http://www.google.com"; None of them worked. How can I do it? If you have any ideas please let me know. Note that I just want to do this as a countermeasure against illegal access in my intranet tool.
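
    One thing worth noting: %{...}o in a LogFormat logs response headers that the server sends back, while HTTP_REFERER comes from the Referer request header set by the browser, which a server-side script cannot rewrite. A hedged sketch of pattern 2 as a plain CGI script (addHeader is not a standard Perl function, so the header is printed directly):

        #!/usr/bin/perl
        # emit a custom response header that %{X-RT-REF}o can record in the access log
        print "X-RT-REF: http://www.google.com\r\n";
        print "Content-Type: text/html\r\n\r\n";
        print "<html><body>ok</body></html>\n";

    with a matching LogFormat along these lines (log file path is illustrative):

        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{X-RT-REF}o\"" rtref
        CustomLog /var/log/apache2/rtref.log rtref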

    Read the article

  • Ubuntu 12.04 connected to wireless network but internet not working

    - by A.J.
    I can connect to my house's wireless network just fine, but when I'm connected I can't browse the web. Firefox starts connecting to a site and then just poops out. This doesn't happen on my roommates' computers (running Windows) or on our 3DSes, so I know it's just my laptop. I already tried sudo dhclient -r sudo dhclient sudo ifconfig eth0 down sudo ifconfig eth0 up Results of a few commands I was asked to run in comments: ping -c 2 4.2.2.2 PING 4.2.2.2 (4.2.2.2) 56(84) bytes of data. ^C --- 4.2.2.2 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 1007ms ping -c 2 google.com PING google.com (173.194.33.38) 56(84) bytes of data. --- google.com ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 1006ms nm-tool NetworkManager Tool State: connected (global) - Device: eth0 ----------------------------------------------------------------- Type: Wired Driver: atl1c State: unavailable Default: no HW Address: 88:AE:1D:6B:4E:E7 Capabilities: Carrier Detect: yes Speed: 100 Mb/s Wired Properties Carrier: off - Device: wlan0 [JUSTICE] ----------------------------------------------------- Type: 802.11 WiFi Driver: ath9k State: connected Default: yes HW Address: 1C:65:9D:65:C6:31 Capabilities: Speed: 1 Mb/s Wireless Properties WEP Encryption: yes WPA Encryption: yes WPA2 Encryption: yes Wireless Access Points (* = current AP) HOME-9B18: Infra, 00:26:F3:53:9B:18, Freq 2412 MHz, Rate 54 Mb/s, Strength 34 WPA WPA2 cougdad48 Network: Infra, 60:33:4B:E4:C4:5D, Freq 2437 MHz, Rate 54 Mb/s, Strength 22 WPA2 cougdad48 Guest Network: Infra, 66:33:4B:E4:C4:5D, Freq 2437 MHz, Rate 54 Mb/s, Strength 20 WPA2 belkin.ade: Infra, 94:44:52:FF:8A:DE, Freq 2457 MHz, Rate 54 Mb/s, Strength 20 WPA WPA2 *JUSTICE: Infra, 00:24:01:7B:9F:7E, Freq 2462 MHz, Rate 54 Mb/s, Strength 88 WEP CenturyLink: Infra, B2:B2:DC:8E:E2:58, Freq 2462 MHz, Rate 54 Mb/s, Strength 17 WPA WPA2 IPv4 Settings: Address: 192.168.0.11 Prefix: 24 (255.255.255.0) Gateway: 192.168.0.1 DNS: 192.168.0.1 (JUSTICE is my home's network.) ping -c 2 198.168.0.1 PING 198.168.0.1 (198.168.0.1) 56(84) bytes of data. --- 198.168.0.1 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 1007ms
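
    Not an answer from the post, but a short sanity check based on the nm-tool output above (gateway 192.168.0.1 - note the last ping went to 198.168.0.1, a different address) that separates a routing problem from a DNS problem:

        ping -c 2 192.168.0.1          # the actual gateway reported by nm-tool
        ping -c 2 8.8.8.8              # raw connectivity past the router
        nslookup google.com 8.8.8.8    # name resolution against a public resolver

    If the gateway answers but 8.8.8.8 does not, the problem is between the router and the ISP for this host; if 8.8.8.8 answers but the lookup fails, it is DNS.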

    Read the article

  • Legal issues regarding embedding a toolbar into a browser [closed]

    - by OmarOthman
    We are in the process of developing a software that provides service to internet users and we would like to ask about the legal liabilities of some issues. Of course, everything is to be done with the consent of the user of our software but our concern is about third party tools and services that may be invoked/used by our product. In particular, these are the concerns: (1) Embedding a toolbar to an existing browser. This screenshot is an example, where the words in the highlighted toolbar are passed to www.google.com for searching, and the contents of the window are the results of the search. I want to know if any consent should be obtained before such a toolbar can be embedded in a web browser, whether there are any legal requirements by the web browser; whether different web browsers have different requirements (at least for Internet Explorer, Firefox, Chrome, Opera and Safari). (2) Invoking a free website from that toolbar (like Google’s search page). The screenshot above demonstrates such an existing toolbar. (3) Full ownership and unrestricted access to the data entered to this toolbar. In the screenshot above, I want to take the words (translation english to spanish) and own them, i.e. storing them in my database and do some processing on them. (4) Ability to track the pages entered by the user starting from that free website. In the screenshot above, you can notice that the user opted only for the third result, whose URL is translate.google.com. I want to have access to this and all URLs clicked from this page for some processing as well. This is a commercial application, so I need a very concrete, precise and reference-supported answer.

    Read the article

  • Why don't we just fix Javascript?

    - by Jan Meyer
    Javascript sucks because of a few fatalities well pointed out by Douglas Crockford. We talk a lot about it. But the point here is, why don't we fix it? Coffeescript of course does that and a lot more. But the question here is another: if we provide a webservice that can convert one version of Javascript to the next, and so on, we can keep the language up to date. Such a conversion allows old code to run, albeit with an ever-increasing startup delay, as newer browsers convert old code to the new syntax. To avoid that delay, the site only needs to take the output of the code-transform and paste it in! The effort has immediate benefits for those businesses interested in the results. The rest can sleep tight: their code will continue to run. If we provide backward code-transformation also, then older browsers can also run ANY new code! Migration scripts should be created by those that make changes to a language. Today they don't, which is in itself a fundamental omission! It should be an obvious part of their job to provide them, as their job isn't really done without them. The onus of making it work should be on them. With this system any site will be able to run in any browser, but new code will run best on the newest browsers. This way we reap the benefit of an up-to-date and productive development environment, where today we suffer, supposedly because of yesterday. This is a misconception. We are all trapped in committee-thinking, and we drag along things that only worsen our performance over time! We cause an ever-increasing complexity that is hard to overstate. Javascript is easily fixed. The fact is we don't. As an example, I have seen Patrick Michaud tackle the migration problem in PmWiki. It included forward migration scripts. Whenever syntax changes were made, a migration script was added to transform pages to the new syntax. As far as I know, ALL migrations have worked flawlessly. In other words, we don't tackle the migration problem, we just drag it along. We are incompetent! And why is that? Because technically incompetent people feel they must decide for us. Because they are incompetent, fear rules them. They are obnoxiously conservative, and we suffer the consequence of bad leadership. But the competent don't need to play by the same rules. They can (and must) change them. They are the path forward. It is about time to leave the past behind, and pursue the leanest meanest, no, eternal functionality. That would in and of itself revolutionize programming. So, why don't we stop whining and fix programming? Begin with Javascript and change the world. Even if the browser doesn't hook into this system, coders could. So language updaters should take it upon them to provide migration scripts. Once they exist, browsers may take advantage of them.

    Read the article

  • A new client comes into my web agency. How to configure email and social accounts to work better? [on hold]

    - by Marco Panichi
    I have created websites for many years but still have not found the right way to organize all the email and social accounts of every client. I mean, every web agency follows dozens of customers. Each client needs at least Google Analytics, AdWords, a Facebook page, a Twitter profile, a YouTube channel, probably a listing on Google Places and maybe a MailChimp (or similar) account. The web agency, in my opinion, must own these accounts, use them to deliver results to the customer and -of course- make them available to the customer for two reasons: - The customer must be able to see how things are going - The client must have the ability to change web agency without suffering. The web agency, however, has many problems in holding all of these accounts. For example, I like the idea of having a Gmail account for each client and using all of Google's products from that account. But it is not possible to create many Gmail accounts from the same IP address and with the same phone number. The web agency could invite the customer to create his own accounts, but: - This is not necessarily a value for the customer (indeed...) - The web agency would manage them anyway from the same IP address, running into problems - If phone verification occurs, the web agency has to disturb the customer for verification. Do you have the same problem? How do you solve it?

    Read the article

  • Why does 301 redirect work for http but not for https?

    - by Tom G
    Through my domain registrar I have set up a domain, essayme.co.uk, to automatically forward to https://google.com. If I go to http://essayme.co.uk it works as expected and redirects me to https://google.com. $curl -i http://essayme.co.uk HTTP/1.1 301 Moved Permanently Cache-Control: max-age=900 Content-Type: text/html Location: https://google.com Server: Microsoft-IIS/7.5 X-AspNet-Version: 4.0.30319 X-Powered-By: ASP.NET Date: Sat, 07 Jun 2014 11:14:16 GMT Content-Length: 0 Age: 0 Connection: keep-alive However, if I go to https://essayme.co.uk it just freezes and times out. $curl -i https://essayme.co.uk curl: (7) Failed connect to essayme.co.uk:443; Operation timed out What is happening in the second case? (and, if possible, how can I get the redirect to work for https?) Problem background/clarification: I don't have an SSL certificate for the essayme.co.uk domain above, but I do for my live domain (let's call it mywebsite.com), and I was seeing the exact same problem on this domain (hence why I'm trying to debug the problem). Unfortunately I can't experiment with the live domain (as it's live) and I would like to avoid having to buy a second certificate for essayme.co.uk just for debugging (unless absolutely necessary). The problem I was seeing: my live domain, mywebsite.com (not its real name), has a valid SSL certificate. Visiting https://www.mywebsite.com displayed the webpage as expected. I had set up forwarding (like in the question above) from the naked domain (mywebsite.com) to https://www.mywebsite.com) Visiting http://mywebsite.com redirected to https://www.mywebsite.com as expected. However, visiting https://mywebsite.com would freeze and time out (as in the question above). I also tried forwarding it to http://www.otherwebsite.com as an experiment (i.e. forwarding to another site that does not use SSL), but the result was the same: Visiting http://mywebsite.com redirected to http://www.otherwebsite.com as expected. Visiting https://mywebsite.com would freeze and time out again. So I set up essayme.co.uk as an experiment to try and understand why it doesn't work.
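
    A hedged way to see what is happening on port 443 (standard tools, nothing specific to this registrar): the 301 can only be sent after a successful TLS handshake, and that handshake needs something listening on 443 with a certificate for the domain, which the registrar's forwarding service may simply not provide.

        # does anything answer on 443 at all, and with which certificate?
        openssl s_client -connect essayme.co.uk:443 -servername essayme.co.uk < /dev/null

        # compare with the working plain-HTTP redirect
        curl -v http://essayme.co.uk -o /dev/null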

    Read the article

  • Will search engines reindex a page that has been set to redirect on to a newer site page?

    - by Luke Duddridge
    We were asked by a client to change a website so that any pages/URLs we were hosting on an older site would now redirect to a newer site hosted somewhere else, and on a different domain name to boot. We did this by changing each page in the IIS site management to redirect to a URL on their new domain instead of rendering a page locally. According to the redirect tool here: http://www.webconfs.com/redirect-check.php . What we have done is search engine friendly. Problem now is... the client has been on a course learning all about meta tags and so thinks they have a better understanding of the "matrix" (remember there is no spoon). As Google still has the older site appearing in a search, this isn't helping matters. I have tried to explain, we have to wait for Google to reindex. I'm not blowing smoke am I? I'm now starting to wonder... will the older site always appear in a search, even though the pages don't exist? Is there a better way I should be redirecting their site to ensure Google will stop keeping an index of pages that no longer exist and would instead replace them with the content in the newer site? A suggestion on the site mentioned above is to use the code: Response.Status="301 Moved Permanently" Response.AddHeader "Location","http://www.new-url.com/" Does using the option in the IIS management tool to redirect the URL not do the same?
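
    For reference, a hedged web.config equivalent of that ASP snippet (IIS 7+, with the HTTP Redirection module installed) that answers every request on the old site with a permanent 301 to the new domain; whether the redirect configured through the IIS management tool is a 301 or a 302 depends on whether the "Permanent (301)" status was selected there:

        <configuration>
          <system.webServer>
            <!-- 301 every request on the old site to the new domain -->
            <httpRedirect enabled="true"
                          destination="http://www.new-url.com/"
                          httpResponseStatus="Permanent"
                          exactDestination="false" />
          </system.webServer>
        </configuration>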

    Read the article

  • Setting a Static IP Running FreeBSD8 in VirtualBox hosted on Windows 7

    - by gvkv
    I'm using VirtualBox on Windows 7 (host) to run a FreeBSD (guest) based web server. I've assigned a static IP of 192.168.80.1 to the (virtualized) NIC, which is run in bridged mode. The problem is that when I ping an external server (such as google.com) I get a No route to host error: dimetro# ping google.com PING google.com (66.249.90.104): 56 data bytes ping: sendto: No route to host ... I can ping the BSD server from both another virtualized machine and my host machine, and from the server I can ping everything on the network. The router IP is 192.168.1.1/16. ADDENDUM: I have the following lines in /etc/rc.conf on the BSD VM to configure networking: defaultrouter="192.168.1.1" ifconfig_em0="inet 192.168.80.1 netmask 255.255.0.0"
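
    Two quick checks on the guest (my suggestion, not from the post) that usually narrow this down - whether the defaultrouter from rc.conf actually became the default route, and whether the gateway is reachable over the bridged NIC at all:

        netstat -rn                     # look for a 'default' entry pointing at 192.168.1.1
        ping -c 2 192.168.1.1           # can the guest reach the router?
        /etc/rc.d/netif restart && /etc/rc.d/routing restart   # re-apply rc.conf if the route is missing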

    Read the article

  • Linux - How to run Firefox with AT command

    - by conualfy
    I am trying to run a specified command at a desired time. I found at for this, and it seems to work fine if I run: echo "ls -al / > /home/florin/test.txt" | at 4:21am But I want to run a different thing: /usr/bin/firefox -new-tab http://google.ro I tried adapting the first line to my action (running it in a terminal opens a new Firefox tab with http://google.ro, so the command is correct), but with at, it does not work: echo "firefox -new-tab http://google.ro" | at 4:23am The task seems to be scheduled, but it does not run. When running the previous line I get the default reply from at: warning: commands will be executed using /bin/sh Should my Firefox command be run differently in sh? Is there a way to do my action with at, or some other way? Thanks a lot!
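
    A common gotcha (my guess, not confirmed in the post) is that jobs started by at run outside the X session, so a graphical program needs DISPLAY set explicitly; a hedged variant of the same line:

        # assumes the X server for this user runs on display :0
        echo "DISPLAY=:0 /usr/bin/firefox -new-tab http://google.ro" | at 4:23am

    Any error output from the job is normally mailed to the local user, so checking that mailbox (or atq for still-pending jobs) shows whether the command ran and why it failed.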

    Read the article

  • htaccess IP blocking with custom 403 Error not working

    - by mrc0der
    I'm trying to block everyone but one IP address from my site on a server running Apache & CentOS. My setup follows the example below. My server: `http://www.myserver.com/` My .htaccess file <limit GET> order deny,allow deny from all allow from 176.219.192.141 </limit> ErrorDocument 403 http://www.google.com ErrorDocument 404 http://www.google.com When I visit http://www.myserver.com/ from an invalid IP, it gives me a generic 403 error. When I visit http://www.myserver.com/page-does-not-exist/ it redirects me correctly to http://www.google.com, but I can't figure out why the 403 error doesn't redirect me too. Anyone have any ideas?
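
    A hedged alternative (assuming mod_rewrite is available) that sends every non-whitelisted visitor straight to Google without relying on the 403 handler at all:

        RewriteEngine On
        # let the single allowed address through, redirect everyone else
        RewriteCond %{REMOTE_ADDR} !^176\.219\.192\.141$
        RewriteRule ^ http://www.google.com/ [R=302,L]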

    Read the article

  • How to send a batch file by email

    - by MikeL
    Trying to send a batch file as an email attachment, I get the following error: mx.google.com rejected your message to the following e-mail addresses: [email protected] mx.google.com gave this error: Our system detected an illegal attachment on your message. Please visit http://support.google.com/mail/bin/answer.py?answer=6590 to review our attachment guidelines. q42si10198525wei.6 Your message wasn't delivered because the recipient's e-mail provider rejected it. This also happens if I place the batch file in a .zip archive. I need to send a batch file to everyone at my company for them to run, preferably without having to change file extensions first. Is this possible by email?

    Read the article

  • Trouble with resolving hostnames on CentOS using Bind

    - by cabaret
    I'm taking a course on server administration at school and I have managed to set up virtual hosting in apache and a dns server on a virtual machine. However, I have now set up an old pc to run CentOS and I'm trying the same on that box. The problem I ran into now is that I can't resolve hostnames from the linux box. I have set up the nameserver in /etc/resolv.conf to the IP of the CentOS machine, but when I try for example ping google.com I get ping: unknown host google.com However, when I do ping 66.102.13.105 (which is the Google IP, figured that out by pinging on my mac) I get: PING 66.102.13.105 (66.102.13.105) 56(84) bytes of data. 64 bytes from 66.102.13.105: icmp_seq=1 ttl=52 time=15.5 ms Slightly confused why this is happening. Could it be because of my router sitting in between the linux machine and the cable modem? It's a D-Link somethingsomething. Thanks in advance
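
    A couple of hedged checks (standard tools) that tell whether BIND itself can resolve, or whether queries from the box never reach a working resolver at all:

        dig @127.0.0.1 google.com +short   # ask the local BIND directly
        dig @8.8.8.8 google.com +short     # ask an outside resolver, bypassing BIND
        cat /etc/resolv.conf               # confirm which nameserver the box really uses

    If the first query times out but the second works, the local BIND isn't answering (or can't recurse past the router); if both work, the problem is in resolv.conf or in how the resolver library picks it up.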

    Read the article

  • What payment gateways do real customers really use when given the choice?

    - by ??????
    I would like to give customers the option of paying however they can whether that be through a proper gateway (e.g. SagePay) or through something else such as PayPal, Amazon Checkout or Google Checkout. Personally I have not bought anything through the Amazon Checkout except for on Amazon.co.uk and my PayPal buys have been limited. As for Google Checkout I have no idea what that is or how it works from a consumer perspective. I understand that people buying from smaller sites are happier to pay by PayPal as they have an account already and trust PayPal. As for Amazon Payments and Google Checkout, do people actually use them if given the choice? There are a lot of people on Kindles these days, happy to buy stuff via Amazon on their Kindle. Would Amazon Payments make sense to this growing crowd? With too many payment gateways on offer it might be confusing at the checkout. Does anyone know if this is a problem for genuine customers? I also have not seen many 'pay by Amazon Payments' icons on websites (you see PayPal all the time). Does advertising the fact that you can pay by Amazon Payments increase sales, e.g. to Kindle owners that have a nebulous book-buying account that 'their other half doesn't know about'?

    Read the article

  • Connected to wireless, but no internet access

    - by boogaloo
    After installing Ubuntu 12.04 a week ago wireless internet had been working fine. It stopped working yesterday, however, and I'm at a loss for what to do even after scouring replies to similar posted problems. I have tried using Google's public DNS and turning off proxy settings on Firefox. I have used nm-tool and lshw to make sure my wireless device and driver are connected. If anyone can help me resolve this issue I would be extremely grateful! @kregerjd $ ping -c3 www.google.com ping: unknown host www.google.com @Alaa: $ route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 wlan0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 wlan0 192.168.1.0 0.0.0.0 255.255.255.0 U 2 0 0 wlan0 $ ping -c4 192.168.1.1 PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data From 192.168.1.104 icmp_seq=1 Destination Host Unavailable From 192.168.1.104 icmp_seq=2 Destination Host Unavailable --- 192.168.1.1 ping statistics --- 4 packets transmitted, 0 received, +2 errors, 100% packet loss, time 2998ms pipe 4

    Read the article

  • Internet slow on one router only [the problem only in Ubuntu] [on hold]

    - by mrSuperEvening
    Internet works perfectly on every other router, but browsing sucks at home (slow browsing and slow loading times). I changed DNS servers to 8.8.0.0, still doesn't help. And funnily, download speed is extremely high on this network (meaning torrents for example), but using browsers and loading websites is extremely slow (only on this network). Do I need to change something in router settings or what can I try? By the way, I use wired connection to router. EDIT: There's no problems when using Windows. EDIT: ifconfig: eth0 Link encap:Ethernet HWaddr f2:4d:a0:c0:3f:4c inet addr:192.168.11.8 Bcast:192.168.11.255 Mask:255.255.255.0 inet6 addr: fe80::f24d:a2ff:fec6:3f4c/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:206798 errors:0 dropped:0 overruns:0 frame:0 TX packets:219570 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:76680734 (76.6 MB) TX bytes:21738160 (21.7 MB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:160 errors:0 dropped:0 overruns:0 frame:0 TX packets:160 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:11094 (11.0 KB) TX bytes:11094 (11.0 KB)` ping -c 2 4.2.2.2 PING 4.2.2.2 (4.2.2.2) 56(84) bytes of data. --- 4.2.2.2 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 1007ms ping -c 2 google.com PING google.com (213.159.32.147) 56(84) bytes of data. 64 bytes from lan-213-159-32-147.kns.skynet.lv (213.159.32.147): icmp_seq=1 ttl=61 time=0.936 ms 64 bytes from lan-213-159-32-147.kns.skynet.lv (213.159.32.147): icmp_seq=2 ttl=61 time=0.937 ms --- google.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.936/0.936/0.937/0.030 ms

    Read the article

  • PHP5 giving failed to open stream: HTTP request failed error when using fopen.

    - by mickey
    Hello everyone. This problem seems to have been discussed in the past everywhere on Google and here, but I have yet to find a solution. A very simple fopen gives me a "PHP Warning: fopen(http://www.google.ca): failed to open stream: HTTP request failed!". The URL I am fetching has no importance, because even when I fetch http://www.google.com it doesn't work. The exact same script works on a different server. The one failing is Ubuntu 10.04 with PHP 5.3.2. This is not a problem in my script; it's something different on my server, or it might be a bug in PHP. I have tried setting a user_agent in php.ini but no success. My allow_url_fopen is set to On. If you have any ideas, feel free!
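
    A hedged variant of the same call that sets an explicit timeout and user agent through a stream context and dumps whatever HTTP response did come back, which helps tell a blocked outbound connection apart from a rejected request:

        <?php
        // build an HTTP context with an explicit timeout and user agent
        $ctx = stream_context_create(array(
            'http' => array(
                'timeout'    => 10,
                'user_agent' => 'Mozilla/5.0 (compatible; fopen test)',
            ),
        ));

        $fh = @fopen('http://www.google.com', 'r', false, $ctx);
        if ($fh === false) {
            // $http_response_header is populated whenever an HTTP response was actually received
            var_dump(isset($http_response_header) ? $http_response_header : 'no response at all');
        } else {
            echo stream_get_contents($fh);
            fclose($fh);
        }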

    Read the article

  • unable to send mail from postfix on Ubuntu 12.04

    - by gilmad
    I'm trying to send an email through Google from my localhost. (via PHP5.3) But Google keeps on blocking my requests. I tried to follow the solutions given to a few similar questions, but for some reason they do not work. I followed these instructions to configure it - http://www.dnsexit.com/support/mailrelay/postfix.html Now for the config data: my main.cf file looks like that: relayhost = [smtp.gmail.com]:587 smtp_fallback_relay = [relay.google.com] smtp_sasl_auth_enable = yes smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd smtp_sasl_security_options = my sasl_passwd looks like that: [smtp.gmail.com]:587 [email protected]:password and that is how the mail.log rows look like: Dec 14 10:24:50 COMP-NAME postfix/pickup[5185]: 1C3987E0EDD: uid=33 from= Dec 14 10:24:50 COMP-NAME postfix/cleanup[5499]: 1C3987E0EDD: message-id=<[email protected] Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: 1C3987E0EDD: from=, size=483, nrcpt=1 (queue active) Dec 14 10:24:50 COMP-NAME postfix/smtp[5501]: 1C3987E0EDD: to=, relay=smtp.gmail.com[173.194.70.109]:587, delay=0.61, delays=0.19/0/0.32/0.1, dsn=5.7.0, status=bounced (host smtp.gmail.com[173.194.70.109] said: 530 5.7.0 Must issue a STARTTLS command first. w3sm8024250eel.17 (in reply to MAIL FROM command)) Dec 14 10:24:50 COMP-NAME postfix/cleanup[5499]: C20677E0EDE: message-id=<[email protected] Dec 14 10:24:50 COMP-NAME postfix/bounce[5502]: 1C3987E0EDD: sender non-delivery notification: C20677E0EDE Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: C20677E0EDE: from=<, size=2532, nrcpt=1 (queue active) Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: 1C3987E0EDD: removed
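
    The "Must issue a STARTTLS command first" line in the log is Gmail refusing an unencrypted session on port 587, so the relay also needs TLS turned on. A hedged main.cf addition (standard Postfix parameters; the CA bundle path is the usual Ubuntu location and may differ):

        # require TLS when relaying through smtp.gmail.com
        smtp_tls_security_level = encrypt
        smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
        smtp_sasl_security_options = noanonymous

    followed by a sudo postfix reload to pick up the change.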

    Read the article

  • Different buddy lists for different accounts in iChat

    - by Idlecool
    I currently have 4 accounts added to iChat: standard GTalk, GTalk for Google Apps, Facebook, and Olark. Facebook and Olark have their own buddy list groups, viz. Facebook and WebUser, and so those buddies appear in separate lists, while the buddies from GTalk and GTalk for Google Apps do not have any group associated with them and appear under the default Buddies list. It's a bit of a pain because I want the buddies from GTalk for Google Apps in a separate buddy list from the default one. Is it possible to do this in iChat?

    Read the article
