Search Results

Search found 27042 results on 1082 pages for 'google forms'.

Page 464/1082 | < Previous Page | 460 461 462 463 464 465 466 467 468 469 470 471  | Next Page >

  • Default /etc/apt/sources.list?

    - by piemesons
    I need the default sources.list for Ubuntu 10.04. Can anybody help me? Here is mine:

      # Ubuntu supported packages
      deb http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
      deb http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
      deb http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
      deb http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
      deb http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse
      deb-src http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
      deb-src http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
      deb-src http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
      deb-src http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
      deb-src http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse
      # Canonical Commercial Repository
      deb http://archive.canonical.com/ubuntu lucid partner
      deb http://archive.canonical.com/ubuntu lucid-backports partner
      deb http://archive.canonical.com/ubuntu lucid-updates partner
      deb http://archive.canonical.com/ubuntu lucid-security partner
      deb http://archive.canonical.com/ubuntu lucid-proposed partner
      deb-src http://archive.canonical.com/ubuntu lucid partner
      deb-src http://archive.canonical.com/ubuntu lucid-backports partner
      deb-src http://archive.canonical.com/ubuntu lucid-updates partner
      deb-src http://archive.canonical.com/ubuntu lucid-security partner
      deb-src http://archive.canonical.com/ubuntu lucid-proposed partner
      # medibuntu
      deb http://packages.medibuntu.org/ lucid free non-free
      deb-src http://packages.medibuntu.org/ lucid free non-free
      # PlayOnLinux
      deb http://deb.playonlinux.com/ lucid main
      # opera
      deb http://deb.opera.com/opera/ lenny non-free
      # google
      deb http://dl.google.com/linux/deb/ stable non-free main
      # Dropbox Official Source
      deb http://linux.dropbox.com/ubuntu karmic main
      # Skype
      deb http://download.skype.com/linux/repos/debian/ stable non-free

    This is the error I am getting from sudo apt-get update:

      Get:9 http://dl.google.com stable/main Packages [1,076B]
      Err http://ppa.launchpad.net lucid/main Packages 404 Not Found
      Get:10 http://dl.google.com stable/main Packages [735B]

    and finally:

      Fetched 9,724B in 3s (2,645B/s)
      W: Failed to fetch http://ppa.launchpad.net/bisig/ppa/ubuntu/dists/lucid/main/binary-i386/Packages.gz 404 Not Found
      E: Some index files failed to download, they have been ignored, or old ones used instead.
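    The 404 itself comes from a Launchpad PPA (ppa.launchpad.net/bisig) that no longer publishes packages for lucid, not from any of the lines above. A minimal sketch for locating and disabling that entry, assuming the PPA named in the error message is the only offending one (adjust the file list to wherever grep finds it):

      # show every apt source line that still references the dead PPA
      grep -rn "ppa.launchpad.net/bisig" /etc/apt/sources.list /etc/apt/sources.list.d/
      # comment those lines out in place (a .bak copy is kept), then refresh the indexes
      sudo sed -i.bak '/ppa\.launchpad\.net\/bisig/ s/^deb/# deb/' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
      sudo apt-get update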

    Read the article

  • Links to my “Best of 2010” Posts

    - by ScottGu
    I hope everyone is having a Happy New Years! 2010 has been a busy blogging year for me (this is the 100th blog post I’ve done in 2010).  Several people this week suggested I put together a summary post listing/organizing my favorite posts from the year.  Below is a quick listing of some of my favorite posts organized by topic area: VS 2010 and .NET 4 Below is a series of posts I wrote (some in late 2009) about the VS 2010 and .NET 4 (including ASP.NET 4 and WPF 4) release we shipped in April: Visual Studio 2010 and .NET 4 Released Clean Web.Config Files Starter Project Templates Multi-targeting Multiple Monitor Support New Code Focused Web Profile Option HTML / ASP.NET / JavaScript Code Snippets Auto-Start ASP.NET Applications URL Routing with ASP.NET 4 Web Forms Searching and Navigating Code in VS 2010 VS 2010 Code Intellisense Improvements WPF 4 Add Reference Dialog Improvements SEO Improvements with ASP.NET 4 Output Cache Extensibility with ASP.NET 4 Built-in Charting Controls for ASP.NET and Windows Forms Cleaner HTML Markup with ASP.NET 4 - Client IDs Optional Parameters and Named Arguments in C# 4 - and a cool scenarios with ASP.NET MVC 2 Automatic Properties, Collection Initializers and Implicit Line Continuation Support with VB 2010 New <%: %> Syntax for HTML Encoding Output using ASP.NET 4 JavaScript Intellisense Improvements with VS 2010 VS 2010 Debugger Improvements (DataTips, BreakPoints, Import/Export) Box Selection and Multi-line Editing Support with VS 2010 VS 2010 Extension Manager (and the cool new PowerCommands Extension) Pinning Projects and Solutions VS 2010 Web Deployment Debugging Tips/Tricks with Visual Studio Search and Navigation Tips/Tricks with Visual Studio Visual Studio Below are some additional Visual Studio posts I’ve done (not in the first series above) that I thought were nice: Download and Share Visual Studio Color Schemes Visual Studio 2010 Keyboard Shortcuts VS 2010 Productivity Power Tools Fun Visual Studio 2010 Wallpapers Silverlight We shipped Silverlight 4 in April, and announced Silverlight 5 the beginning of December: Silverlight 4 Released Silverlight 4 Tools for VS 2010 and WCF RIA Services Released Silverlight 4 Training Kit Silverlight PivotViewer Now Available Silverlight Questions Announcing Silverlight 5 Silverlight for Windows Phone 7 We shipped Windows Phone 7 this fall and shipped free Visual Studio development tools with great Silverlight and XNA support in September: Windows Phone 7 Developer Tools Released Building a Windows Phone 7 Twitter Application using Silverlight ASP.NET MVC We shipped ASP.NET MVC 2 in March, and started previewing ASP.NET MVC 3 this summer.  
ASP.NET MVC 3 will RTM in less than 2 weeks from today: ASP.NET MVC 2: Strongly Typed Html Helpers ASP.NET MVC 2: Model Validation Introducing ASP.NET MVC 3 (Preview 1) Announcing ASP.NET MVC 3 Beta and NuGet (nee NuPack) Announcing ASP.NET MVC 3 Release Candidate 1  Announcing ASP.NET MVC 3 Release Candidate 2 Introducing Razor – A New View Engine for ASP.NET ASP.NET MVC 3: Layouts with Razor ASP.NET MVC 3: New @model keyword in Razor ASP.NET MVC 3: Server-Side Comments with Razor ASP.NET MVC 3: Razor’s @: and <text> syntax ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor ASP.NET MVC 3: Layouts and Sections with Razor IIS and Web Server Stack The IIS and Web Stack teams have made a bunch of great improvements to the core web server this year: Fix Common SEO Problems using the URL Rewrite Extension Introducing the Microsoft Web Farm Framework Automating Deployment with Microsoft Web Deploy Introducing IIS Express SQL CE 4 (New Embedded Database Support with ASP.NET) Introducing Web Matrix EF Code First EF Code First is a really nice new data option that enables a very clean code-oriented data workflow: Announcing Entity Framework Code-First CTP5 Release Class-Level Model Validation with EF Code First and ASP.NET MVC 3 Code-First Development with Entity Framework 4 EF 4 Code First: Custom Database Schema Mapping Using EF Code First with an Existing Database jQuery and AJAX Contributions My team began making some significant source code contributions to the jQuery project this year: jQuery Templates, Data Link and Globalization Accepted as Official jQuery Plugins jQuery Templates and Data Linking (and Microsoft contributing to jQuery) jQuery Globalization Plugin from Microsoft Patches and Hot Fixes Some useful fixes you can download prior to VS 2010 SP1: Patch for Cut/Copy “Insufficient Memory” issue with VS 2010 Patch for VS 2010 Find and Replace Dialog Growing Patch for VS 2010 Scrolling Context Menu Videos of My Talks Some recordings of technical talks I’ve done this year: ASP.NET 4, ASP.NET MVC, and Silverlight 4 Talks I did in Europe VS 2010 and ASP.NET 4 Web Forms Talk in Arizona Other About Technical Debates (and ASP.NET Web Forms and ASP.NET MVC debates in particular) ASP.NET Security Fix Now on Windows Update Upcoming Web Camps I’d like to say a big thank you to everyone who follows my blog – I really appreciate you reading it (the comments you post help encourage me to write it).  See you in the New Year! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Yum update not working on CentOS 6.2 minimal install

    - by Owen
    Note: This is my first question on the Stack Exchange network, so please have mercy and provide guidance where needed. I have installed a CentOS 6.2 KVM guest and I am having a problem getting yum to work. This is my first time working with CentOS, so I suspect it's a setting somewhere that I am missing but cannot find using Google. Here are my steps: downloaded CentOS-6.2-x86_64-minimal.iso, booted, and went through the default steps (the only questions asked were keyboard, timezone, root password and use of the entire hdd), then restarted, logged in, and pinged google.com to no avail. I then set the following:

      vi /etc/resolv.conf
      nameserver 8.8.8.8
      nameserver 8.8.4.4

      vi /etc/sysconfig/network-scripts/ifcfg-eth0
      DEVICE="eth0"
      HWADDR="52:54:00:42:1B:4A"
      #NM_CONTROLLED="yes"
      BOOTPROTO=none
      ONBOOT="yes"
      NETMASK=255.255.255.0
      IPADDR=192.168.122.151
      TYPE=Ethernet

      vi /etc/sysconfig/network
      NETWORKING=yes
      NETWORKING_IPV6=no
      HOSTNAME=server3.example.com
      GATEWAY=192.168.122.1

    I can now ping google.com:

      ping google.com
      PING google.com (173.194.70.139) 56(84) bytes of data.
      64 bytes from fa-in-f139.1e100.net (173.194.70.139): icmp_seq=1 ttl=50 time=5.88 ms
      64 bytes from fa-in-f139.1e100.net (173.194.70.139): icmp_seq=2 ttl=50 time=5.77 ms

    But I cannot 'yum update':

      yum update
      Loaded plugins: fastestmirror, presto
      Loading mirror speeds from cached hostfile
      Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=6&arch=x86_64&repo=os error was
      14: PYCURL ERROR 7 - "Failed to connect to 2a01:c0:2:4:216:3eff:fe0d:266d: Network is unreachable"
      Error: Cannot find a valid baseurl for repo: base

    My KVM guest is also NAT'd, in case that is of concern.
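    The AAAA address in that error suggests yum (via libcurl) is preferring IPv6 while the NAT'd guest has no IPv6 route. Two hedged ways to force IPv4 are sketched below; whether this particular yum build honours ip_resolve is an assumption, so check man yum.conf first:

      # option 1: make yum resolve mirrors over IPv4 only (assumption: ip_resolve is supported here)
      echo "ip_resolve=4" >> /etc/yum.conf
      # option 2: disable IPv6 in the guest altogether
      echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
      echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
      sysctl -p
      yum clean all && yum update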

    Read the article

  • Pros/Cons of switching from Exchange to Gmail

    - by Brent
    We are a medium-large non-profit company, with around 1000 staff and volunteers, and have been using MS Exchange (currently 2003) for our mail system for years. I recently attended a Google conference where they were positing that "Cloud computing is the way of the future", and encouraging us to switch from doing our own email with Exchange, to using GMail and Google Apps for everything. Additionally, one of our departments has been pushing from inside to do this transition within their own department, if not throughout the entire organization. I can definitely see some benefits - such as: Archive space - we never seem to have the space our users want, and of course, the more we get, the more we have to back up OS Agnostic - Exchange is definitely built for windows, and with mac and linux users on the rise, these users increasingly demand better tools / support. Google offers this. Better archiving - potential of e-discovery, that doesn't exist in a practical way with our current setup. Switching would relieve us of a fair bit of server administration, give more options to our end users, and free up the server resources we are now using for Exchange. Our IT department wants to be perceived as providing up-to-date solutions to technical problems, and this change would definitely provide such an image. Google's infrastructure is obviously much more robust than ours, and they employ some of the world's best security and network experts. However, there are also some serious drawbacks: We would be essentially outsourcing one of our mission-critical systems to a 3rd party The switch would inevitably involve Google Apps and perhaps more as well. That means we would have a-lot more at the mercy of a single (potentially weak) password. (is there a way to make this more secure using a password plus physical key of some sort??) Our data would not remain under our roof - or even in our country (Canada). This obviously has plusses on the Disaster Recovery side, but I think there are potential negatives on the legal side. I can't imagine that somebody as large as Google would be as responsive as we would want with regard to non-critical issues such as tracing missing emails, etc. (not sure how much access we would have to basic mail logs - for instance) Can anyone help me evaluate this decision? What issues am I overlooking? What experiences have you had with this transition (or the opposite - gmail to Exchange) Can you add to the points I have already outlined?

    Read the article

  • Guest (and occasional co-host) on Jesse Liberty's Yet Another Podcast

    - by Jon Galloway
    I was a recent guest on Jesse Liberty's Yet Another Podcast talking about the latest Visual Studio, ASP.NET and Azure releases. Download / Listen: Yet Another Podcast #75–Jon Galloway on ASP.NET/ MVC/ Azure Co-hosted shows: Jesse's been inviting me to co-host shows and I told him I'd show up when I was available. It's a nice change to be a drive-by co-host on a show (compared with the work that goes into organizing / editing / typing show notes for Herding Code shows). My main focus is on Herding Code, but it's nice to pop in and talk to Jesse's excellent guests when it works out. Some shows I've co-hosted over the past year: Yet Another Podcast #76–Glenn Block on Node.js & Technology in China Yet Another Podcast  #73 - Adam Kinney on developing for Windows 8 with HTML5 Yet Another Podcast #64 - John Papa & Javascript Yet Another Podcast #60 - Steve Sanderson and John Papa on Knockout.js Yet Another Podcast #54–Damian Edwards on ASP.NET Yet Another Podcast #53–Scott Hanselman on Blogging Yet Another Podcast #52–Peter Torr on Windows Phone Multitasking Yet Another Podcast #51–Shawn Wildermuth: //build, Xaml Programming & Beyond And some more on the way that haven't been released yet. Some of these I'm pretty quiet, on others I get wacky and hassle the guests because, hey, not my podcast so not my problem. Show notes from the ASP.NET / MVC / Azure show: What was just released Visual Studio 2012 Web Developer features ASP.NET 4.5 Web Forms Strongly Typed data controls Data access via command methods Similar Binding syntax to ASP.NET MVC Some context: Damian Edwards and WebFormsMVP Two questions from Jesse: Q: Are you making this harder or more complicated for Web Forms developers? Short answer: Nothing's removed, it's just a new option History of SqlDataSource, ObjectDataSource Q: If I'm using some MVC patterns, why not just move to MVC? Short answer: This works really well in hybrid applications, doesn't require a rewrite Allows sharing models, validation, other code between Web Forms and MVC ASP.NET MVC Adaptive Rendering (oh, also, this is in Web Forms 4.5 as well) Display Modes Mobile project template using jQuery Mobile OAuth login to allow Twitter, Google, Facebook, etc. login Jon (and friends') MVC 4 book on the way: Professional ASP.NET MVC 4 Windows 8 development Jesse and Jon announce they're working on a new book: Pro Windows 8 Development with XAML and C# Jon and Jesse agree that it's nice to be able to write Windows 8 applications using the same skills they picked up for Silverlight, WPF, and Windows Phone development. Compare / contrast ASP.NET MVC and Windows 8 development Q: Does ASP.NET and HTML5 development overlap? Jon thinks they overlap in the MVC world because you're writing HTML views without controls Jon describes how his web development career moved from a preoccupation with server code to a focus on user interaction, which occurs in the browser Jon mentions his NDC Oslo presentation on Learning To Love HTML as Beautiful Code Q: How do you apply C# / XAML or HTML5 skills to Windows 8 development? Q: If I'm a XAML programmer, what's the learning curve on getting up to speed on ASP.NET MVC? Jon describes the difference in application lifecycle and state management Jon says it's nice that web development is really interactive compared to application development Q: Can you learn MVC by reading a book? Or is it a lot bigger than that? What is Azure, and why would I use it? 
Jon describes the traditional Azure platform mode and how Azure Web Sites fits in Q: Why wouldn't Jesse host his blog on Azure Web Sites? Domain names on Azure Web Sites File hosting options Q: Is Azure just another host? How is it different from any of the other shared hosting options? A: Azure gives you the ability to scale up or down whenever you want A: Other services are available if or when you want them

    Read the article

  • Is there a tool that can test what SSL/TLS cipher suites a particular website offers?

    - by Jeremy Powell
    Is there a tool that can test what SSL/TLS cipher suites a particular website offers? I've tried openssl, but if you examine the output: $ echo -n | openssl s_client -connect www.google.com:443 CONNECTED(00000003) depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA verify error:num=20:unable to get local issuer certificate verify return:0 --- Certificate chain 0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com i:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA 1 s:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority --- Server certificate -----BEGIN CERTIFICATE----- MIIDITCCAoqgAwIBAgIQL9+89q6RUm0PmqPfQDQ+mjANBgkqhkiG9w0BAQUFADBM MQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkg THRkLjEWMBQGA1UEAxMNVGhhd3RlIFNHQyBDQTAeFw0wOTEyMTgwMDAwMDBaFw0x MTEyMTgyMzU5NTlaMGgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlh MRYwFAYDVQQHFA1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKFApHb29nbGUgSW5jMRcw FQYDVQQDFA53d3cuZ29vZ2xlLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC gYEA6PmGD5D6htffvXImttdEAoN4c9kCKO+IRTn7EOh8rqk41XXGOOsKFQebg+jN gtXj9xVoRaELGYW84u+E593y17iYwqG7tcFR39SDAqc9BkJb4SLD3muFXxzW2k6L 05vuuWciKh0R73mkszeK9P4Y/bz5RiNQl/Os/CRGK1w7t0UCAwEAAaOB5zCB5DAM BgNVHRMBAf8EAjAAMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwudGhhd3Rl LmNvbS9UaGF3dGVTR0NDQS5jcmwwKAYDVR0lBCEwHwYIKwYBBQUHAwEGCCsGAQUF BwMCBglghkgBhvhCBAEwcgYIKwYBBQUHAQEEZjBkMCIGCCsGAQUFBzABhhZodHRw Oi8vb2NzcC50aGF3dGUuY29tMD4GCCsGAQUFBzAChjJodHRwOi8vd3d3LnRoYXd0 ZS5jb20vcmVwb3NpdG9yeS9UaGF3dGVfU0dDX0NBLmNydDANBgkqhkiG9w0BAQUF AAOBgQCfQ89bxFApsb/isJr/aiEdLRLDLE5a+RLizrmCUi3nHX4adpaQedEkUjh5 u2ONgJd8IyAPkU0Wueru9G2Jysa9zCRo1kNbzipYvzwY4OA8Ys+WAi0oR1A04Se6 z5nRUP8pJcA2NhUzUnC+MY+f6H/nEQyNv4SgQhqAibAxWEEHXw== -----END CERTIFICATE----- subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com issuer=/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA --- No client certificate CA names sent --- SSL handshake has read 1777 bytes and written 316 bytes --- New, TLSv1/SSLv3, Cipher is AES256-SHA Server public key is 1024 bit Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : AES256-SHA Session-ID: 748E2B5FEFF9EA065DA2F04A06FBF456502F3E64DF1B4FF054F54817C473270C Session-ID-ctx: Master-Key: C4284AE7D76421F782A822B3780FA9677A726A25E1258160CA30D346D65C5F4049DA3D10A41F3FA4816DD9606197FAE5 Key-Arg : None Start Time: 1266259321 Timeout : 300 (sec) Verify return code: 20 (unable to get local issuer certificate) --- it just shows that the cipher suite is something with AES256-SHA. I know I could grep through the hex dump of the conversation, but I was hoping for something a little more elegant. I would prefer Linux tools, but Windows (or other) would be fine. This question is motivated by the security testing I do for PCI and general penetration testing. Update: GregS points out below that the SSL server picks from the cipher suites of the client. So it seems I would need to test all cipher suites one at a time. I think I can hack something together, but is there a tool that does particularly this?
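    Since the server simply picks one suite from whatever the client offers, a single s_client run only ever reveals one cipher. A rough sketch that offers each suite individually and records which handshakes succeed (www.google.com:443 is just the target from the example above, and the loop can only test suites the local OpenSSL build knows about):

      HOST=www.google.com:443
      for cipher in $(openssl ciphers 'ALL:eNULL' | tr ':' ' '); do
        if echo -n | openssl s_client -connect "$HOST" -cipher "$cipher" >/dev/null 2>&1; then
          echo "accepted: $cipher"
        else
          echo "rejected: $cipher"
        fi
      done

    Dedicated scanners such as sslscan automate essentially the same loop with tidier output.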

    Read the article

  • Need help merging 2 AHK scripts

    - by Mikey
    i have two functioning scripts that i want to merge into a single AHK File. My problem is that when i combine both scripts, the second script doesnt function or causes an error on script 1. Either way, script 2 ist not functioning at all. Here are some facts: Script 1 = a simple menu script where i want to assign hotkeys to. Script 2 = A small launcher script from a user named Tertius in autohotkey forum. Can someone please look at both codes and help me merge this? The INI File for script 2 looks like this: Keywords.ini npff|Firefox|Firefox gm|Gmail|http://gmail.google.com ;;;;;;;;;;;; BEGIN SCRIPT 2 DetectHiddenWindows, On SetWinDelay, -1 SetKeyDelay, -1 SetBatchLines, -1 GoSub Remin SetTimer, Remin, % 1000 * 60 Loop, read, %A_ScriptDir%\keywords.ini { LineNumber = %A_Index% Loop, parse, A_LoopReadLine, | { if (A_Index == 1) abbrevs%LineNumber% := A_LoopField else if (A_Index == 2) tips%LineNumber% := A_LoopField else if (A_Index == 3) programs%LineNumber% := A_LoopField else if (A_Index == 4) params%LineNumber% := A_LoopField } tosay := abbrevs%LineNumber% } cnt = %LineNumber% Loop { Input, Key, L1 V, % "{LControl}{RControl}{LAlt}{RAlt}{LShift}{RShift}{LWin}{RWin}" . "{AppsKey}{F1}{F2}{F3}{F4}{F5}{F6}{F7}{F8}{F9}{F10}{F11}{F12}{Left}{Right}{Up}{Down}" . "{Home}{End}{PgUp}{PgDn}{Del}{Ins}{BS}{Capslock}{Numlock}{PrintScreen}{Pause}{Escape}" If( ( Asc(Key) = 65 && Asc(Key) <= 90 ) || ( Asc(Key) = 97 && Asc(Key) <= 122 ) ) Word .= Key Else { Word := "" Continue } tipup := false Loop %cnt% { if (Word == abbrevs%A_index%) { tip := tips%A_index% ToolTip %tip% tipup := true } else { if (tipup == false) ToolTip } } } $Tab:: Loop %cnt% { if (Word != "" && Word == abbrevs%A_index%) { Word := "" StringLen, len, abbrevs%A_index% Loop %len% Send {Shift Down}{Left} Send {Shift Up}{BS} ToolTip program := programs%A_index% param := params%A_index% run, %program% %param% return } } Word := "" Send {Tab} Return ~LButton:: ~MButton:: ~RButton:: ~XButton1:: ~XButton2:: Word := "" Tooltip Return Remin: WinMinimize, %A_ScriptFullPath% - AutoHotkey v WinHide, %A_ScriptFullPath% - AutoHotkey v Return ;;;;;;;;;; END SCRIPT 2 ;;;;;;;;;;;;;; BEGIN SCRIPT 1 ;This is a working script that creates a popup menu. ; Create the popup menu by adding some items to it. Menu, MyMenu, Add, FIS 201, MenuHandler Menu, MyMenu, Add ; Add a separator line. Menu, MyMenu, Color, Lime, Single ;Define the Menu Color ; Create another menu destined to become a submenu of the above menu. Menu, Submenu1, Add, Item2, MenuHandler Menu, Submenu1, Add, Item3, MenuHandler Menu, Submenu1, Color, Yellow ;Define the Menu Color ; Create another menu destined to become a submenu of the above menu. Menu, Submenu2, Add, Item1a, MenuHandler Menu, Submenu2, Add, Item2a, MenuHandler Menu, Submenu2, Add, Item3a, MenuHandler Menu, Submenu2, Add, Item4a, MenuHandler Menu, Submenu2, Add, Item5a, MenuHandler Menu, Submenu2, Add, Item6a, MenuHandler Menu, Submenu2, Color, Aqua ;Define the Menu Color ; Create a submenu in the first menu (a right-arrow indicator). When the user selects it, the second menu is displayed. Menu, MyMenu, Add, BKRS 119, :Submenu1 Menu, MyMenu, Add ; Add a separator line below the submenu. Menu, MyMenu, Add, BKRS 201, :Submenu2 Menu, MyMenu, Add ; Add a separator line below the submenu. Menu, MyMenu, Add ; Add a separator line below the submenu. Menu, MyMenu, Add, Google Search, Google ; Add another menu item beneath the submenu. return ; End of script's auto-execute section. Capslock & LButton::Menu, MyMenu, Show ; i.e. 
press the Win-Z hotkey to show the menu. MenuHandler: MsgBox You selected %A_ThisMenuItem% from the menu %A_ThisMenu%. return ;;;;;;;;;;;;;;;;;;;;;;;; ;;;;;;;; Google Search ;;; FORMAT InputBox, OutputVar [, Title, Prompt, HIDE, Width, Height, X, Y, Font, Timeout, Default] Google: InputBox, SearchTerm, Google Search,,,350, 120 if SearchTerm < "" Run http://www.google.de/search?sclient=psy-ab&hl=de&site=&source=hp&q=%SearchTerm%&btnG=Suche return ; Make Window Transparent Space::WinSet, Transparent, 125, A ^!Space UP::WinSet, Transparent, OFF, A return ;;;;;;;;;;; END SCRIPT 1 Help is appreciated. Kind Regards, Mikey

    Read the article

  • Homepage 301 Redirect to SSL Homepage

    - by user33692
    I'm hoping somebody might be able to provide a bit of advice on an issue I am having. I have 1 site where we implemented a 301 redirect on the homepage from http to https. We have links on the homepage to other parts of the site that are not under SSL (in fact there is only one other page under SSL). When I go to our webmaster account I notice that we are not being provided with any webmaster information (search queries, backlinks) related to our homepage under SSL. I performed a Fetch Google on the homepage and the information it returned is: HTTP/1.1 301 Moved Permanently Date: Fri, 08 Nov 2013 17:26:24 GMT Server: Apache/2.2.16 (Debian) Location: https://mysite.com/ Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 242 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/html; charset=iso-8859-1 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>301 Moved Permanently</title> </head><body> <h1>Moved Permanently</h1> <p>The document has moved <a href="https://mysite.com/">here</a>.</p> <hr> <address>Apache/2.2.16 (Debian) Server at mysite.com</address> </body></html> I am worried that the fact that Google Fetch is not getting the correct Title Tags and Meta information from our homepage and that this is hurting our search results. Additionally, I am worried that we need to do something specific with the SiteMap to ensure that Google is correctly indexing all our pages and being able to flow from the https to the http without issues. Does anybody have any advice on how we can correctly set this up or be sure that Google is fetching the correct information?
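    The Fetch output above only shows the 301, which is expected for the http URL; what matters is that the https URL answers 200 with the full page, since that is the version whose title and meta information get indexed. A quick check from the command line, using the mysite.com placeholder from the question:

      # the http homepage should answer with a single 301 pointing at https
      curl -sI http://mysite.com/ | grep -E "^(HTTP|Location)"
      # the https homepage should answer 200 and carry the real <title> and meta description
      curl -s https://mysite.com/ | grep -iE "<title>|name=\"description\"" | head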

    Read the article

  • How do I do an exact whois search?

    - by brianegge
    When I execute the following whois command on my Ubuntu server, I get all sorts of other domains which contain google.com in the name but clearly aren't owned by Google. As this appears to be some sort of spam, I won't paste the output here. I'd like to check for exactly the name I typed in. I thought the following would work, but it doesn't. What is the proper way to do an exact match?

      whois -Hx google.com
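    For .com and .net names, the registry whois at whois.verisign-grs.com accepts query prefixes that force an exact match, which avoids the list of look-alike domains entirely. A short sketch (the '=' and 'domain' prefixes are Verisign conventions; other TLD registries behave differently):

      # exact-match lookup against the .com registry
      whois -h whois.verisign-grs.com "=google.com"
      # equivalent form using the 'domain' keyword
      whois -h whois.verisign-grs.com "domain google.com"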

    Read the article

  • Is this SPF record correct for me?

    - by DT
    I'm completely new to Stack Overflow, so hi! I need to add an SPF record to my site "main.com" (not the real address) to allow an email publishing company "emailpublishers.com" (not the real address) to send emails on my behalf. However, I'm nervous about adding an SPF record because of the havoc it could wreak if done incorrectly. I use Google Apps. I also use "auxiliary.com" to send mail from "main.com." And, of course, I use "main.com" to send mail as well. "auxiliary.com" doesn't have an SPF record. I used Microsoft's and OpenSPF's wizards to generate the following SPF entry. Does it seem to be correct for me?

      "v=spf1 a mx ip4:55.55.555.55 mx:alt1.aspmx.l.google.com mx:alt2.aspmx.l.google.com mx:aspmx.l.google.com mx:aspmx2.googlemail.com mx:aspmx3.googlemail.com mx:aspmx4.googlemail.com mx:aspmx5.googlemail.com a:auxiliary.com include:_spf.google.com include:auxiliary.com mx:auxiliary.com include:emailpublishers.com mx:emailpublishers.com ~all"

    However, my host MediaTemple says in a knowledge base article to use:

      v=spf1 a:main.com/20 ~all

    So that added to my confusion. Thanks a lot!
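    One thing to note: include: of a domain that publishes no SPF record at all evaluates as a permanent error, so the include:auxiliary.com term only makes sense once auxiliary.com has its own record. Whichever version is published, it can be sanity-checked from the outside before real mail depends on it; a small sketch using the placeholder names from the question (an online SPF validator can do a deeper syntax check):

      # what recipients will actually see for the domain
      dig +short TXT main.com
      # confirm that every included domain publishes SPF itself
      dig +short TXT _spf.google.com
      dig +short TXT emailpublishers.com
      dig +short TXT auxiliary.com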

    Read the article

  • Easy solution to monitoring & blocking connections to non-malicious services, IPs, and tracking companies

    - by binarybunny
    Our family lives in the middle of nowhere, so the only high-speed internet available is Verizon's 3G mobile broadband. We have the highest package available, yet we continually go over the 10GB limit and get charged $10 for every 1GB we go over. We run a business from home, so stopping when we hit the limit is not an option. I've found the majority of connections are to Google, Microsoft, Akamai, Facebook, and other web service companies (mainly Google). I know these are harmless connections, but when it costs money for them to monitor our web activity it becomes a serious problem. Here are some things I've done, but I'm sure there's something else that could help before blocking a huge set of IP ranges:

      - stopped using Windows (on my machine)
      - use the MVPS hosts file on all computers
      - use Firefox on all computers (with the don't track me option)
      - ad-block plugin on all browsers
      - blocking Google updates
      - blocking Windows updates
      - block images in browsers (when possible)
      - use Comodo (paranoia-level style of blocking)
      - virus-free computers with ESET NOD32
      - bought a router and installed dd-wrt in an attempt to block connections more diligently (and throttle bandwidth if it comes to that)

    Anything I'm missing? I know Google Analytics is on almost all websites, as well as FB Like buttons, but I would like to be able to stop these connections without blocking the use of Google services like Gmail, etc. Any ideas?

    Read the article

  • Blocking 'good' bots in nginx with multiple conditions for certain off-limits URLs where humans can go

    - by Glenn Plas
    After 2 days of searching/trying/failing I decided to post this here, I haven't found any example of someone doing the same nor what I tried seems to be working OK. I'm trying to send a 403 to bots not respecting the robots.txt file (even after downloading it several times). Specifically Googlebot. It will support the following robots.txt definition. User-agent: * Disallow: /*/*/page/ The intent is to allow Google to browse whatever they can find on the site but return a 403 for the following type of request. Googlebot seems to keep on nesting these links eternally adding paging block after block: my_domain.com:80 - 66.x.67.x - - [25/Apr/2012:11:13:54 +0200] "GET /2011/06/ page/3/?/page/2//page/3//page/2//page/3//page/2//page/2//page/4//page/4//pag e/1/&wpmp_switcher=desktop HTTP/1.1" 403 135 "-" "Mozilla/5.0 (compatible; G ooglebot/2.1; +http://www.google.com/bot.html)" It's a wordpress site btw. I don't want those pages to show up, even though after the robots.txt info got through, they stopped for a while only to begin crawling again later. It just never stops .... I do want real people to see this. As you can see, google get a 403 but when I try this myself in a browser I get a 404 back. I want browsers to pass. root@my_domain:# nginx -V nginx version: nginx/1.2.0 I tried different approaches, using a map and plain old nono if's and they both act the same: (under http section) map $http_user_agent $is_bot { default 0; ~crawl|Googlebot|Slurp|spider|bingbot|tracker|click|parser|spider 1; } (under the server section) location ~ /(\d+)/(\d+)/page/ { if ($is_bot) { return 403; # Please respect the robots.txt file ! } } I recently had to polish up my Apache skills for a client where I did about the same thing like this : # Block real Engines , not respecting robots.txt but allowing correct calls to pass # Google RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Googlebot/2\.[01];\ \+http://www\.google\.com/bot\.html\)$ [NC,OR] # Bing RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ bingbot/2\.[01];\ \+http://www\.bing\.com/bingbot\.htm\)$ [NC,OR] # msnbot RewriteCond %{HTTP_USER_AGENT} ^msnbot-media/1\.[01]\ \(\+http://search\.msn\.com/msnbot\.htm\)$ [NC,OR] # Slurp RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Yahoo!\ Slurp;\ http://help\.yahoo\.com/help/us/ysearch/slurp\)$ [NC] # block all page searches, the rest may pass RewriteCond %{REQUEST_URI} ^(/[0-9]{4}/[0-9]{2}/page/) [OR] # or with the wpmp_switcher=mobile parameter set RewriteCond %{QUERY_STRING} wpmp_switcher=mobile # ISSUE 403 / SERVE ERRORDOCUMENT RewriteRule .* - [F,L] # End if match This does a bit more than I asked nginx to do but it's about the same principle, I'm having a hard time figuring this out for nginx. So my question would be, why would nginx serve my browser a 404 ? Why isn't it passing, The regex isn't matching for my UA: "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.30 Safari/536.5" There are tons of example to block based on UA alone, and that's easy. It also looks like the matchin location is final, e.g. it's not 'falling' through for regular user, I'm pretty certain that this has some correlation with the 404 I get in the browser. As a cherry on top of things, I also want google to disregard the parameter wpmp_switcher=mobile , wpmp_switcher=desktop is fine but I just don't want the same content being crawled multiple times. 
Even though I ended up adding wpmp_switcher=mobile via the google webmaster tools pages (requiring me to sign up ....). that also stopped for a while but today they are back spidering the mobile sections. So in short, I need to find a way for nginx to enforce the robots.txt definitions. Can someone shell out a few minutes of their lives and push me in the right direction please ? I really appreciate ANY response that makes me think harder ;-)
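    Before changing the config further, it helps to reproduce both cases from the shell so you can see which location block answers each request; my_domain.com below is the placeholder from the logs. One plausible explanation for the browser 404 is that the regex location matches for everyone, and for non-bot requests it contains no handler (no fastcgi_pass or try_files), so nginx falls back to looking for a static file at that URI and fails:

      URL="http://my_domain.com/2011/06/page/3/"
      # what Googlebot gets (should be the 403 from the location block)
      curl -s -o /dev/null -w "%{http_code}\n" -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "$URL"
      # what a regular browser gets (currently a 404, expected to be the real page)
      curl -s -o /dev/null -w "%{http_code}\n" -A "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.30 Safari/536.5" "$URL"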

    Read the article

  • Different nginx rules based on referrer

    - by juana
    I'm using WordPress with WP Super Cache. I want visitors who come from Google (That inlcudes all country specific referrers like google.co.in, google.co.uk and etc.) to see uncached contents. There are my nginx rules which are not working the way I want: server { server_name website.com; location / { root /var/www/html/website.com; index index.php; if ($http_referer ~* (www.google.com|www.google.co) ) { rewrite . /index.php break; } if (-f $request_filename) { break; } set $supercache_file ''; set $supercache_uri $request_uri; if ($request_method = POST) { set $supercache_uri ''; } if ($query_string) { set $supercache_uri ''; } if ($http_cookie ~* "comment_author_|wordpress|wp-postpass_" ) { set $supercache_uri ''; } if ($supercache_uri ~ ^(.+)$) { set $supercache_file /wp-content/cache/supercache/$http_host/$1index.html; } if (-f $document_root$supercache_file) { rewrite ^(.*)$ $supercache_file break; } if (!-e $request_filename) { rewrite . /index.php last; } } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/html/website.com$fastcgi_script_name; include fastcgi_params; } } What should I do to achieve my goal?

    Read the article

  • PHP Mail() to Gmail = Spam

    - by grantw
    Recently Gmail has started marking emails sent directly from my server (using php mail()) as spam and I'm having problems trying to find the issue. If I send an exact copy of the same email from my email client it goes to the Gmail inbox. The emails are plain text, around 7 lines long and contain a URL link in plain text. As the emails sent from my client are getting through fine I'm thinking that the content isn't the issue. It would be greatly appreciated if someone could take a look at the the following headers and give me some advice why the email from the server is being marked as spam. Email from Server: Delivered-To: [email protected] Received: by 10.49.98.228 with SMTP id el4csp101784qeb; Thu, 15 Nov 2012 14:58:52 -0800 (PST) Received: by 10.60.27.166 with SMTP id u6mr2296595oeg.86.1353020331940; Thu, 15 Nov 2012 14:58:51 -0800 (PST) Return-Path: [email protected] Received: from dom.domainbrokerage.co.uk (dom.domainbrokerage.co.uk. [174.120.246.138]) by mx.google.com with ESMTPS id df4si17005013obc.50.2012.11.15.14.58.51 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 15 Nov 2012 14:58:51 -0800 (PST) Received-SPF: pass (google.com: domain of [email protected] designates 174.120.246.138 as permitted sender) client-ip=174.120.246.138; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 174.120.246.138 as permitted sender) [email protected]; dkim=pass [email protected] DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=domainbrokerage.co.uk; s=default; h=Date:Message-Id:Content-Type:Reply-to:From:Subject:To; bh=2RJ9jsEaGcdcgJ1HMJgQG8QNvWevySWXIFRDqdY7EAM=; b=mGebBVOkyUhv94ONL3EabXeTgVznsT1VAwPdVvpOGDdjBtN1FabnuFi8sWbf5KEg5BUJ/h8fQ+9/2nrj+jbtoVLvKXI6L53HOXPjl7atCX9e41GkrOTAPw5ZFp+1lDbZ; Received: from grantw by dom.domainbrokerage.co.uk with local (Exim 4.80) (envelope-from [email protected]) id 1TZ8OZ-0008qC-Gy for [email protected]; Thu, 15 Nov 2012 22:58:51 +0000 To: [email protected] Subject: Offer Accepted X-PHP-Script: www.domainbrokerage.co.uk/admin.php for 95.172.231.27 From: My Name [email protected] Reply-to: [email protected] Content-Type: text/plain; charset=Windows-1251 Message-Id: [email protected] Date: Thu, 15 Nov 2012 22:58:51 +0000 X-AntiAbuse: This header was added to track abuse, please include it with any abuse report X-AntiAbuse: Primary Hostname - dom.domainbrokerage.co.uk X-AntiAbuse: Original Domain - gmail.com X-AntiAbuse: Originator/Caller UID/GID - [500 500] / [47 12] X-AntiAbuse: Sender Address Domain - domainbrokerage.co.uk X-Get-Message-Sender-Via: dom.domainbrokerage.co.uk: authenticated_id: grantw/from_h Email from client: Delivered-To: [email protected] Received: by 10.49.98.228 with SMTP id el4csp101495qeb; Thu, 15 Nov 2012 14:54:49 -0800 (PST) Received: by 10.182.197.8 with SMTP id iq8mr2351185obc.66.1353020089244; Thu, 15 Nov 2012 14:54:49 -0800 (PST) Return-Path: [email protected] Received: from dom.domainbrokerage.co.uk (dom.domainbrokerage.co.uk. 
[174.120.246.138]) by mx.google.com with ESMTPS id ab5si17000486obc.44.2012.11.15.14.54.48 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 15 Nov 2012 14:54:49 -0800 (PST) Received-SPF: pass (google.com: domain of [email protected] designates 174.120.246.138 as permitted sender) client-ip=174.120.246.138; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 174.120.246.138 as permitted sender) [email protected]; dkim=pass [email protected] DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=domainbrokerage.co.uk; s=default; h=Content-Transfer-Encoding:Content-Type:Subject:To:MIME-Version:From:Date:Message-ID; bh=bKNjm+yTFZQ7HUjO3lKPp9HosUBfFxv9+oqV+NuIkdU=; b=j0T2XNBuENSFG85QWeRdJ2MUgW2BvGROBNL3zvjwOLoFeyHRU3B4M+lt6m1X+OLHfJJqcoR0+GS9p/TWn4jylKCF13xozAOc6ewZ3/4Xj/YUDXuHkzmCMiNxVcGETD7l; Received: from w-27.cust-7941.ip.static.uno.uk.net ([95.172.231.27]:1450 helo=[127.0.0.1]) by dom.domainbrokerage.co.uk with esmtpa (Exim 4.80) (envelope-from [email protected]) id 1TZ8Ke-0001XH-7p for [email protected]; Thu, 15 Nov 2012 22:54:48 +0000 Message-ID: [email protected] Date: Thu, 15 Nov 2012 22:54:50 +0000 From: My Name [email protected] User-Agent: Postbox 3.0.6 (Windows/20121031) MIME-Version: 1.0 To: [email protected] Subject: Offer Accepted Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-AntiAbuse: This header was added to track abuse, please include it with any abuse report X-AntiAbuse: Primary Hostname - dom.domainbrokerage.co.uk X-AntiAbuse: Original Domain - gmail.com X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12] X-AntiAbuse: Sender Address Domain - domainbrokerage.co.uk X-Get-Message-Sender-Via: dom.domainbrokerage.co.uk: authenticated_id: [email protected]

    Read the article

  • Is meta description still relevant?

    - by Jeff Atwood
    I received this bit of advice about the meta description tag recently: Meta descriptions are used by Google probably 80% of the time for the snippet. They don’t help with rankings but you should probably use them. You could just auto generate them from the first part of the question. The description tag exists in the header, like so: <meta name="Description" content="A brief summary of the content on the page."> I'm not sure why we would need this field, as Google seems perfectly capable of showing the relevant search terms in context in the search result pages, like so (I searched for c# list performance): In other words, where would a meta description summary improve these results? We want the page to show context around the actual search hits, not a random summary we inserted! Google Webmaster Central has this advice: For some sites, like news media sources, generating an accurate and unique description for each page is easy: since each article is hand-written, it takes minimal effort to also add a one-sentence description. For larger database-driven sites, like product aggregators, hand-written descriptions are more difficult. In the latter case, though, programmatic generation of the descriptions can be appropriate and is encouraged -- just make sure that your descriptions are not "spammy." Good descriptions are human-readable and diverse, as we talked about in the first point above. The page-specific data we mentioned in the second point is a good candidate for programmatic generation. I'm struggling to think of any scenario when I would want the Google-generated summary, that is, actual context from the page for the search terms, to be replaced by a hard-coded meta description summary of the question itself.

    Read the article

  • Where does Picasa store albums?

    - by Dan
    For people searching, the question might also be phrased: How do I restore Picasa albums from backup? When I reinstalled my computer and restored my photos from backup, some of my albums showed up, but many didn't. I've found the following info: Picasa on Windows stores (stored?) album info in these places: Vista: C:\Users\<myaccount>\AppData\Local\Google\Picasa2Albums\ XP: C:\Documents and Settings\<myaccount>\Local Settings\Application Data\google\Picasa2Albums\ I restored that folder and was still missing many of my albums. That folder also contained a folder of backups, but the most recent one was from a long time ago and I've created albums since then. According to https://support.google.com/picasa/bin/picasa.google.com/support/bin/static.py?hl=en&page=release_notes.cs, since the Dec 8, 2011 build, Picasa saves album info in .ini file(s). This probably explains the albums that I do see. http://katelharrison.blogspot.com/2012/01/how-to-restore-picasa-albums-mac.html has some great info on restoring albums on Macs, but the folder structure seems to be different there than on Windows.

    Read the article

  • Solr Autosuggest

    - by rahul
    Hi, I am using the Solr (1.4) autosuggest feature via the TermsComponent. Currently, if I type 'goo', Solr suggests words like 'google'. But I would like to receive suggestions like 'google, google alerts, ...', i.e. suggestions with both single and multiple terms. I'm not sure whether I need to use edge n-grams for that, e.g. indexing 'google' as 'go', 'oo', 'og', ... . But I think I don't need this, since I don't want partial search. Please let me know if there is any way to do multiple-word suggestions. Thanks in advance.
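    One common way to get phrase suggestions out of the terms component is to build them from a field that preserves whole phrases (for example a shingled or keyword-tokenized copy of the source field) rather than single tokens. Assuming the /terms request handler from the example solrconfig.xml and a phrase field named 'suggest' (both assumptions), the query side looks roughly like this:

      # single- and multi-word completions come back together if the indexed field holds whole phrases
      curl "http://localhost:8983/solr/terms?terms=true&terms.fl=suggest&terms.prefix=goo&terms.limit=10&wt=json"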

    Read the article

  • Gmail sends bulk messages sent by postfix to spam - spf, rDNS are set up (headers inside)

    - by snitko
    here are the headers of the blocked messages (actual domain replaced with domain.com, ip address with n.n.n.n and gmail account name with person.account): Delivered-To: [email protected] Received: by 10.216.89.137 with SMTP id c9cs247685wef; Tue, 6 Dec 2011 16:06:37 -0800 (PST) Received: by 10.224.199.134 with SMTP id es6mr14447757qab.2.1323216395590; Tue, 06 Dec 2011 16:06:35 -0800 (PST) Return-Path: <[email protected]> Received: from mail.domain.com (domain.com. [n.n.n.n]) by mx.google.com with ESMTP id b16si7471407qcv.131.2011.12.06.16.06.35; Tue, 06 Dec 2011 16:06:35 -0800 (PST) Received-SPF: pass (google.com: domain of [email protected] designates n.n.n.n as permitted sender) client-ip=n.n.n.n; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates n.n.n.n as permitted sender) [email protected] Received: by mail.domain.com (Postfix, from userid 5001) id 26ADE381E3; Tue, 6 Dec 2011 19:06:35 -0500 (EST) Received: from domain.com (domain.com [127.0.0.1]) by mail.domain.com (Postfix) with ESMTP id 0148638030 for <[email protected]>; Tue, 6 Dec 2011 19:06:31 -0500 (EST) Date: Tue, 06 Dec 2011 19:06:31 -0500 From: DomainApp <[email protected]> Reply-To: [email protected] To: [email protected] Message-ID: <[email protected]> Subject: Roman Snitko says hi Mime-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit X-No-Spam: True Precedence: bulk List-Unsubscribe: [email protected] Messages go to Spam folder on various gmail accounts, so it's not a coincidence. I followed all gmail guides on sending bulk emails from here https://mail.google.com/support/bin/answer.py?hl=en&answer=81126. I also checked my ip-address here http://www.dnsblcheck.co.uk/ and it's NOT on the blacklists. Thus I have two questions: What may be the possible reason for the messages to go to Spam folder? Is there any way to contact Google and ask them what causes this? Update: I have set up openDKIM on my server, everything works, gmail message headers say that dkim=pass, which means everything is set up correctly. Messages still end up in Spam folder.
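    Spam-folder placement with passing SPF and DKIM usually comes down to content and sender reputation rather than a single header, but it is still worth confirming from the outside that everything the headers claim actually resolves. A small sketch using the placeholders from above (the 'default' DKIM selector is taken from the openDKIM note in the update; adjust if yours differs):

      dig +short TXT domain.com                       # the SPF record recipients will query
      dig +short TXT default._domainkey.domain.com    # the DKIM public key for the signing selector
      dig +short -x n.n.n.n                           # reverse DNS should give mail.domain.com (or the HELO name)
      host mail.domain.com                            # and the forward lookup should point back at n.n.n.n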

    Read the article

  • cPanel mail goes into spam instead of inbox in Gmail

    - by Robin Jain
    I have c-panel vps server. I have created a domain on the same server, but when I send mail through webmail to gmail email id it goes into the spam folder. Note--->Mail ip note blacklisted Spf records enable DKIM enable reverse dns are perfect ====================================================================== Email header Information: Delivered-To: [email protected] Received: by 10.143.93.13 with SMTP id v13csp119806wfl; Fri, 6 Jul 2012 08:01:36 -0700 (PDT) Received: by 10.182.52.42 with SMTP id q10mr26133912obo.46.1341586895571; Fri, 06 Jul 2012 08:01:35 -0700 (PDT) Return-Path: <[email protected]> Received: from lakshyacs-u.securehostdns.com ([50.97.147.134]) by mx.google.com with ESMTPS id fx3si18028369obc.144.2012.07.06.08.01.35 (version=TLSv1/SSLv3 cipher=OTHER); Fri, 06 Jul 2012 08:01:35 -0700 (PDT) Received-SPF: pass (google.com: domain of [email protected] designates 50.97.147.134 as permitted sender) client-ip=50.97.147.134; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 50.97.147.134 as permitted sender) [email protected] Received: from localhost.localdomain ([127.0.0.1]:39016 helo=harishjoshico.com) by lakshyacs-u.securehostdns.com with esmtpa (Exim 4.77) (envelope-from <[email protected]>) id 1SnA2J-0006Nq-05 for [email protected]; Fri, 06 Jul 2012 20:31:35 +0530 Received: from 223.189.14.213 ([223.189.14.213]) (SquirrelMail authenticated user [email protected]) by harishjoshico.com with HTTP; Fri, 6 Jul 2012 20:31:35 +0530 Message-ID: <[email protected]> Date: Fri, 6 Jul 2012 20:31:35 +0530 Subject: ggglkhl From: [email protected] To: [email protected] User-Agent: SquirrelMail/1.4.22 MIME-Version: 1.0 Content-Type: text/plain;charset=iso-8859-1 Content-Transfer-Encoding: 8bit X-Priority: 3 (Normal) Importance: Normal X-AntiAbuse: This header was added to track abuse, please include it with any abuse report X-AntiAbuse: Primary Hostname - lakshyacs-u.securehostdns.com X-AntiAbuse: Original Domain - gmail.com X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12] X-AntiAbuse: Sender Address Domain - harishjoshico.com jhkhl ================================================================

    Read the article

  • Rsync and wildcards

    - by Jay White
    I am trying to back up both the "Last Session" and "Current Session" files for Google Chrome in one command, but using a wildcard doesn't seem to work. I am trying with the following command:

      rsync -e "ssh -i new.key" -r --verbose -tz --stats --progress --delete '/cygdrive/c/Users/jay/AppData/Local/Google/Chrome/User Data/Default/*Session' user@host:"/chrome\ sessions/"

    and get the following error:

      rsync: link_stat "/cygdrive/c/Users/jay/AppData/Local/Google/Chrome/User Data/Default/*Session" failed: No such file or directory (2)

    What am I doing wrong?
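    Because the source path is single-quoted, the shell never expands *Session, and rsync does not glob local source paths itself, so it looks for a file literally named '*Session' (unquoting is not an option here because of the spaces in the path). Two hedged alternatives, keeping the paths from the question:

      # let rsync pick the files via filter rules
      rsync -e "ssh -i new.key" -rtz --verbose --stats --progress --delete \
        --include="Last Session" --include="Current Session" --exclude="*" \
        "/cygdrive/c/Users/jay/AppData/Local/Google/Chrome/User Data/Default/" \
        user@host:"/chrome\ sessions/"
      # or name both files explicitly (no -r/--delete needed when the sources are plain files)
      rsync -e "ssh -i new.key" -tz --verbose --stats --progress \
        "/cygdrive/c/Users/jay/AppData/Local/Google/Chrome/User Data/Default/Last Session" \
        "/cygdrive/c/Users/jay/AppData/Local/Google/Chrome/User Data/Default/Current Session" \
        user@host:"/chrome\ sessions/"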

    Read the article

  • SPF record for Gmail?

    - by Chris
    I have DNS, with an SPF TXT record, configured for a domain name. The primary user of the domain name now needs to be able to send both from our SMTP servers and also from her Gmail account. I've seen all the information about adding "include:_spf.google.com" to the SPF TXT record, but, as I look into it, it appears that record is outdated. In particular, I had the user send me a test message, and note that it was:

      Received: from mail-la0-f50.google.com (mail-la0-f50.google.com [209.85.215.50])

    However, _spf.google.com doesn't list that IP address:

      $ dig +short _spf.google.com txt
      "v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17 ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20 ip4:207.126.144.0/20 ip4:173.194.0.0/16 ?all"

    (Note that a 209.85.218.0 network is listed, but not 209.85.215.0.) Is there a better way to enable sending from Gmail? This user sends to at least one recipient with a strict SPF policy that bounces mail not from a designated host... Many thanks!
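    Worth double-checking the arithmetic before concluding the record is stale: ip4:209.85.128.0/17 in the output above spans 209.85.128.0 through 209.85.255.255, which does include 209.85.215.50, so the include should already cover that sending host. A quick sketch for verifying a CIDR range from the shell (ipcalc and grepcidr availability are assumptions; any CIDR calculator will do):

      # show the address range the /17 actually covers
      ipcalc 209.85.128.0/17
      # or test the specific sending address against it
      echo 209.85.215.50 | grepcidr 209.85.128.0/17 && echo covered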

    Read the article

  • DNS zone file SPF configuration to support sending mail from multiple servers and Gmail

    - by Tauren
    I want to configure SPF on a domain to allow mail to be sent from: the x.com website server (x.com and www.x.com - both at same IP) it's MX servers (smtp.x.com, mx.x.com, mail.x.com) another server that isn't listed as an MX server (somehost.x.com) via gmail using an account that has authenticated use of [email protected] Will this zone file work? If not, what are the problems with it? $ttl 38400 @ IN SOA ns1.x.com. hostmaster.x.com. ( 201003092 ; serial 8H ; refresh 15M ; retry 1W ; expire 1H ) ; minimum @ NS ns1.x.com. @ NS ns2.x.com. @ MX 10 mx.x.com. @ MX 20 smtp.x.com. @ MX 30 mailhost.x.com. ; SPF records @ IN TXT "v=spf1 a mx a:somehost.x.com include:_spf.google.com ~all" mx IN TXT "v=spf1 a -all" smtp IN TXT "v=spf1 a -all" mailhost IN TXT "v=spf1 a -all" Questions: Is _spf.google.com the right thing to include for gmail.com, or is it only for Google Hosted Apps? If only for Google Apps, what should I include to send from gmail.com? If mail shouldn't be sent from anywhere else, is it safe to use -all instead of ~all? Does it make sense to add specific SPF records for each of the mail servers? Any other problems with the zone file? I want to confirm these things before making changes to my zone file. The file has SPF configured basically the same now, just without google.com and somehost, but I want to make sure I won't break things when I change it.

    Read the article

  • Multi language site - use of canonical link and link rel="alternate"

    - by julia
    I keep reading everywhere that if you have a multilanguage site, where the same page appears in, say, French and English, then this is considered as duplicate content by google. It is written that using canonical link is the solution, but I do not understand how to use it in this case. Should I: Choose either French URL or English URL to be the canonical (main) one, and where I will place the canonical link? If so, how do I decide which of the two URLs must be canonical? both languages are important to me and I want the content under both languages to be indexed by google and served to the user, depending on the language in which he searches. OR should I place a canonical link on both French and English URLs? If so, then I do not understand the meaning of using the canonical link? In this case would both URLs be indexed, are both of them considered as "important" by google and not duplicates? Also I read that link rel="alternate" can be used to indicate to google that, for example the French URL is the French-language equivalent of the English page. This makes sense and I understand how to use such links, but how are they combined with canonical links? Should I define both the canonical URL AND specify rel="alternate" in both URLs? Could someone help me to clarify this, cause I'm stuck with this and can't seem to find a good-enough explanation in different sources.

    Read the article

  • Name resolution not working with IPv6 on CentOS

    - by jolivier
    I just installed CentOs 6.3 on a server to be installed in a data center, but cannot get name resolution / curl to work. I know this is because of it trying to use ipv6, since ping google.com works, curl -4 google.com works, but not curl google.com. I removed the ipv6 adress from the interface and it does not change anything. This is very problematic since most system tools like yum fail at name resolution currently. Browsers like Firefox work because they might be using another tool for name resolution than the one use by curl. I managed to fix this on workstations by completely disabling ipv6 following tutorials like this one / hardcoding name resolution in /etc/hosts. But since I am here configuring a server which will be later installed in a remote data center, I would like not to mess up, understand what is going on and fix it properly. Besides, I will face the same issue with more servers to come so I would really appreciate your help in understanding this problem and how to solve it. I would be happy to provide more information if needed to help understand what is going on. The current network configuration is a small enterprise network, with a DNS server (let's call it A) configured once a long time ago. dig google.com and dig -4 google.com are both refused by the A DNS. But this is also true for my workstation on which curl is working (and yes they both use the same A DNS server). Indeed this faulty server and my workstation have multiple nameservers in /etc/resolv.conf, and the second one is working fine for both of them, so if I remove A from my resolv.conf everything works fine! Regards, Olivier
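    Given that dig against nameserver A is refused while the second resolver works, a reasonable first step is to test every configured nameserver separately for both A and AAAA answers before blaming IPv6 as a whole. A small diagnostic sketch:

      # query each nameserver from /etc/resolv.conf directly, for IPv4 and IPv6 records
      for ns in $(awk '/^nameserver/ {print $2}' /etc/resolv.conf); do
        echo "== $ns =="
        dig +short @"$ns" google.com A
        dig +short @"$ns" google.com AAAA
      done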

    Read the article
