Search Results

Search found 3578 results on 144 pages for 'smtp auth'.


  • django model Form. Include fields from related models

    - by Tom
    Hi. I have a model called Student, which has some fields and a OneToOne relationship with the user (django.contrib.auth.User).

        class Student(models.Model):
            phone = models.CharField(max_length=25)
            birthdate = models.DateField(null=True)
            gender = models.CharField(max_length=1, choices=GENDER_CHOICES)
            city = models.CharField(max_length=50)
            personalInfo = models.TextField()
            user = models.OneToOneField(User, unique=True)

    Then I have a ModelForm for that model:

        class StudentForm(forms.ModelForm):
            class Meta:
                model = Student

    Using the fields attribute in class Meta, I've managed to show only some fields in a template. However, can I indicate which User fields to show? Something like fields = ('personalInfo', 'user.username') is currently not showing anything, although it works with Student fields only. Thanks in advance.
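
    A ModelForm can only expose fields from its own model, so the related User fields are usually handled with a second form (or extra form fields) saved alongside the StudentForm. A minimal sketch assuming the Student model above; the view, URL name and template name are made up for illustration:

        # forms.py -- a sketch, not the poster's code; pairing a UserForm with the
        # StudentForm is one common way to edit fields from two related models.
        from django import forms
        from django.contrib.auth.models import User
        from myapp.models import Student   # adjust to your app's import path

        class UserForm(forms.ModelForm):
            class Meta:
                model = User
                fields = ('username',)          # the User fields you want to expose

        class StudentForm(forms.ModelForm):
            class Meta:
                model = Student
                fields = ('personalInfo',)      # the Student fields you want to expose

        # views.py -- bind both forms to the same POST data and save them together.
        from django.shortcuts import redirect, render

        def edit_student(request, pk):
            student = Student.objects.get(pk=pk)
            user_form = UserForm(request.POST or None, instance=student.user)
            student_form = StudentForm(request.POST or None, instance=student)
            if request.method == 'POST' and user_form.is_valid() and student_form.is_valid():
                user_form.save()
                student_form.save()
                return redirect('student_detail', pk=pk)   # hypothetical URL name
            return render(request, 'student_edit.html',    # hypothetical template
                          {'user_form': user_form, 'student_form': student_form})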

    Read the article

  • Backup Google Calendar programmatically: http://www.google.com/reader/subscriptions/export

    - by Michael
    I'm struggling to write a Python script that automatically grabs the zip file containing all my Google calendars and stores it (as a backup) on my hard disk. I'm using ClientLogin to get an authentication token (and I can obtain the token successfully). Unfortunately, I'm unable to retrieve the file at https://www.google.com/calendar/exporticalzip. It always asks me for the login credentials again by returning a login page as HTML (instead of the zip). Here's the critical code:

        post_data = urllib.urlencode({'auth': token, 'continue': zip_url})
        request = urllib2.Request('https://www.google.com/calendar', post_data, header)
        try:
            f = urllib2.urlopen(request)
            result = f.read()
        except:
            print "Error"

    Does anyone have any ideas, or has anyone done this before? Or an alternative idea for how to back up all my calendars (automatically)?
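
    With ClientLogin the token is normally passed in an Authorization header (GoogleLogin auth=...) rather than as a POST field, so a plain GET of the export URL with that header may be all that's needed. A rough sketch under that assumption, taking the already-obtained token as an argument (ClientLogin itself was later deprecated, so treat this as illustrative only):

        import urllib2

        def backup_calendars(token, path='calendars-backup.zip'):
            """Download the all-calendars zip using a ClientLogin token obtained earlier."""
            request = urllib2.Request('https://www.google.com/calendar/exporticalzip')
            # ClientLogin tokens go in the Authorization header, not the request body.
            request.add_header('Authorization', 'GoogleLogin auth=%s' % token)
            response = urllib2.urlopen(request)
            with open(path, 'wb') as backup:
                backup.write(response.read())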

    Read the article

  • Postfix Whitelist before recipient restrictions

    - by GruffTech
    Alright, some background. We have an anti-spam cluster handling about 2-3 million emails per day and blocking somewhere in the range of 99% of spam email for our end users. The underlying SMTP server is Postfix 2.2.10. The "Frontline defense" before mail gets carted off to SpamAssassin/ClamAV/etc. is attached below.

        ...basic config....
        smtpd_recipient_restrictions =
            reject_unauth_destination,
            reject_rbl_client b.barracudacentral.org,
            reject_rbl_client cbl.abuseat.org,
            reject_rbl_client bl.mailspike.net,
            check_policy_service unix:postgrey/socket
        ...more basic config....

    As you can see, standard RBL services from various companies, as well as a Postgrey service. The problem is, I have one client (out of thousands) who is very upset that we blocked an important email of theirs. It was sent through a Russian freemailer that was, at the time, blocked in two of our three RBL servers. I explained the situation to them; however, they are insisting we not block any of their emails. So I need a method of whitelisting ANY email that comes to domain.com, and I need it to take place before any of the recipient restrictions: they want no RBL or Postgrey blocking at all. I've done a bit of research myself; http://www.howtoforge.com/how-to-whitelist-hosts-ip-addresses-in-postfix seemed to be a good guide at first and almost fixed my problem, but I want it to accept based on the TO address, not the originating server.

    Read the article

  • Django Per-site caching using memcached

    - by Paul
    Hi. So I'm using per-site caching on a project and I've observed the following, which is kind of confusing. When I load a flat page in my browser, then change it through admin and do a refresh (within the cache timeout), there is no change in the page, as expected. However, when I start a new session in a different browser and load the page (still within the timeout), the app is hit instead of the cache. Isn't the cache key generated from the URL? It seems that the session state is getting in there somewhere, which is causing a cache miss. Any ideas? Thanks.

        MIDDLEWARE_CLASSES = (
            'django.middleware.cache.UpdateCacheMiddleware',
            'django.middleware.common.CommonMiddleware',
            'django.contrib.sessions.middleware.SessionMiddleware',
            'django.contrib.auth.middleware.AuthenticationMiddleware',
            'django.middleware.gzip.GZipMiddleware',
            'django.middleware.http.ConditionalGetMiddleware',
            'django.middleware.doc.XViewMiddleware',
            'ittybitty.middleware.IttyBittyURLMiddleware',
            'django.contrib.flatpages.middleware.FlatpageFallbackMiddleware',
            'maintenancemode.middleware.MaintenanceModeMiddleware',
            'djangodblog.middleware.DBLogMiddleware',
            'SSL.middleware.SSLRedirect',  # SSL middleware to handle SSL
            'django.middleware.cache.FetchFromCacheMiddleware',
        )
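
    One thing worth knowing here (a likely explanation, not stated in the question itself): Django's per-site cache key is built from the URL plus any request headers named in the response's Vary header, and middleware that touches request.session (SessionMiddleware / AuthenticationMiddleware) makes Django add Vary: Cookie. A different browser means a different session cookie, hence a different cache key and a miss. A sketch of the relevant settings, with illustrative values:

        # settings.py sketch with illustrative values; the middleware ordering is the
        # question's own, trimmed for brevity.
        CACHE_MIDDLEWARE_SECONDS = 300            # per-site cache timeout
        CACHE_MIDDLEWARE_KEY_PREFIX = 'mysite'    # hypothetical prefix
        # Older Django versions: only cache responses for anonymous users.
        CACHE_MIDDLEWARE_ANONYMOUS_ONLY = True

        MIDDLEWARE_CLASSES = (
            'django.middleware.cache.UpdateCacheMiddleware',    # must be first
            # ... session, auth, gzip and the other middleware from the question ...
            'django.middleware.cache.FetchFromCacheMiddleware', # must be last
        )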

    Read the article

  • Getting started with webserver clustering.

    - by Ernie
    I work for a small ISP, and we host about 250 domains and all the stuff that goes along with that: DNS, mail, spam filtering, and backups. Currently, we have separate DNS servers (two of them) and mail servers (outgoing mail is actually on the secondary DNS server, but was previously on its own server). In the past, this was done as an insurance measure. The last thing we need is for some doofus (usually yours truly) to hose a server, taking out DNS and mail right along with it, or for spammers to jam our incoming SMTP server, preventing outgoing mail from being sent too. In the past, this was a problem, and our servers were set up the way they are now to combat it. However, clustering solutions like Sun's Cobalt RAQ (in days of olde) and Virtualmin appear to cater to an all-in-one approach, then deal with failures through redundant servers. I have avoided this thus far, but we've been using Virtualmin on our web server for a while now, and I'd like to expand into using it for a high-availability cluster. Our networking partner has recently built a datacenter that has eliminated all of our other bugaboos like network, cooling, and power issues, so now the only thing left to go wrong is me hosing a server, which happened earlier this month. One of the bigger reasons we've avoided going this route is that our hardware requirements aren't particularly high. One server easily handles all the sites we host (most of them are flat sites). Also, load-balancing routers tend to be expensive and complicated. All that I'm really expecting to do is build a two-node cluster for redundancy so that when I hose a server (however rare that might be), we're not out for 8-12 hours while I rebuild it. What I need to know is how to get started, and whether I'm really in a position to bother with this kind of thing at all.

    Read the article

  • dnssec zonesigner ignoring out-of-zone data

    - by jordi12100
    I am trying to configure DNSSEC with BIND9 on CentOS 6.4 running the DirectAdmin control panel. I am using this tutorial to make it work: https://www.dnssec-tools.org/wiki/index.php/Zonesigner But I can't get it to work... When I run this command:

        zonesigner --genkeys jordikroon.nl.db jordikroon.nl.db.signed

    I get this error:

        jordikroon.nl.db:17: ignoring out-of-zone data (jordikroon.nl)
        jordikroon.nl.db:18: ignoring out-of-zone data (jordikroon.nl)
        jordikroon.nl.db:22: ignoring out-of-zone data (jordikroon.nl)
        jordikroon.nl.db:29: ignoring out-of-zone data (jordikroon.nl)
        jordikroon.nl.db:33: ignoring out-of-zone data (jordikroon.nl)
        zone jordikroon.nl.db/IN: has no NS records
        zone jordikroon.nl.db/IN: not loaded due to errors.

    I can't find anything on the web about this error. This is my zone db file:

        $TTL 14400
        @ IN SOA ns1.ghservers.org. hostmaster.jordikroon.nl. (
            2013090703 14400 3600 1209600 86400 )
        jordikroon.nl.    14400 IN NS   ns1.ghservers.org.
        jordikroon.nl.    14400 IN NS   ns2.ghservers.org.
        cp                14400 IN A    85.17.32.228
        ftp               14400 IN A    85.17.32.228
        jordikroon.nl.    14400 IN A    85.17.32.228
        localhost         14400 IN A    127.0.0.1
        mail              14400 IN A    85.17.32.228
        pop               14400 IN A    85.17.32.228
        smtp              14400 IN A    85.17.32.228
        www               14400 IN A    85.17.32.228
        jordikroon.nl.    14400 IN MX   10 mail
        jordikroon.nl.    14400 IN TXT  "v=spf1 a mx ip4:85.17.32.228 ~all"
        localhost         14400 IN AAAA ::1

    How do I fix this? All IN records are being ignored. Any help is welcome :-)

    Read the article

  • How to make a post request with REQUESTS package for Python?

    - by jorrebor
    I am trying to use the Toggl API. I use Requests instead of urllib2 for doing my GETs and POSTs, but I am stuck.

        payload = {
            "project": {
                "name": "Another Project",
                "billable": False,
                "workspace": {
                    "Name": "jorrebor's workspace",
                    "id": 213272
                },
                "automatically_calculate_estimated_workhours": False
            }
        }
        url = "https://www.toggl.com/api/v6/projects.json"
        r = requests.post(url, data=json.dumps(payload), auth=HTTPBasicAuth('[email protected]', 'mypassword'))

    Authentication seems to be fine, but the payload format probably isn't. A curl command with the same parameters does work fine:

        curl -v -u heremytoken:api_token -H "Content-type: application/json" -d "{\"project\":{\"billable\":true,\"workspace\":{\"id\":213272},\"name\":\"Another project\",\"automatically_calculate_estimated_workhours\":false}}" -X POST https://www.toggl.com/api/v6/projects.json

    What is wrong with my payload? The response I get is ["Name can't be blank","Workspace can't be blank"], which leads me to conclude that the authentication works but Toggl cannot read my JSON object.
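
    The working curl command sends a Content-type: application/json header, while the requests call does not, so the server is likely parsing the body as form data rather than JSON. A minimal sketch of the same POST with the header set explicitly (credentials and the workspace id are the question's placeholders):

        import json
        import requests
        from requests.auth import HTTPBasicAuth

        payload = {
            "project": {
                "name": "Another Project",
                "billable": False,
                "workspace": {"id": 213272},
                "automatically_calculate_estimated_workhours": False,
            }
        }

        # Setting Content-Type tells the API to treat the body as JSON, matching what
        # the working curl command does with -H "Content-type: application/json".
        r = requests.post(
            "https://www.toggl.com/api/v6/projects.json",
            data=json.dumps(payload),
            headers={"Content-Type": "application/json"},
            auth=HTTPBasicAuth("[email protected]", "mypassword"),
        )
        print r.text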

    Read the article

  • Sending a file from my application (Indy/Delphi) to an ASP page and then onto another server (Amazon

    - by user89691
    I have a need to store files on Amazon AWS S3, but in order to isolate the user from the AWS authentication I want to go via an ASP page on my site, which the user will be logged into. So: the application sends the file to the ASP page using the Delphi Indy library's TIdHTTP.Put(FileStream) routine, along with some authentication stuff (mine, not AWS) on the query string. The ASP page checks the auth details and then, if OK, stores the file on S3 using my Amazon account. The problem I have is: how do I access the data coming in from the Indy PUT using JScript in the ASP page and pass it on to S3? I'm OK with AWS signing, etc.; it's just the nuts and bolts of connecting the two bits (the incoming request and the outgoing AWS request)... TIA R

    Read the article

  • Extending User object in Django: User model inheritance or use UserProfile?

    - by Chris
    To extend the User object with custom fields, the Django docs recommend using UserProfiles. However, according to this answer to a question about this from a year or so back: extending django.contrib.auth.models.User also works better now -- ever since the refactoring of Django's inheritance code in the models API. And articles such as this lay out how to extend the User model with custom fields, together with the advantages (retrieving properties directly from the user object, rather than through the .get_profile()). So I was wondering whether there is any consensus on this issue, or reasons to use one or the other. Or even what the Django team currently think?
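
    For reference, a minimal sketch of the profile approach the docs describe; the field names here are made up, and AUTH_PROFILE_MODULE / get_profile() refer to the old-style profile mechanism that existed in Django at the time:

        # models.py -- sketch of the profile approach: keep User untouched and hang
        # extra fields off a related model, pointed to by AUTH_PROFILE_MODULE.
        from django.db import models
        from django.contrib.auth.models import User

        class UserProfile(models.Model):
            user = models.OneToOneField(User)
            phone = models.CharField(max_length=25, blank=True)   # illustrative fields
            city = models.CharField(max_length=50, blank=True)

        # settings.py (old-style profiles): AUTH_PROFILE_MODULE = 'myapp.UserProfile'
        # Access then goes through user.get_profile().phone, whereas model inheritance
        # would let you read the extra fields directly off the user object.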

    Read the article

  • Flex: Result event multiple times

    - by Tom
    Hello everybody! I am trying to learn Flex and I have the following code: http://pastebin.com/rZwxF7w1 This code is for my login component. I want to get a special string for encrypting my password; this string is provided by my auth service. But when I log in, I get the "Done" alert multiple times (line 69 in the pastebin code, or line 4 in the code at the bottom of this question), and I want it to show only once. Does someone know what is wrong with this code? Tom

        protected function tryLogin():void {
            encryptStringResult.addEventListener('result', function(event:ResultEvent):void {
                var encryptString:String = event.result.toString();
                Alert.show('Done');
            });
            encryptStringResult.token = auth.getEncryptString();
        }

    Read the article

  • tcp connect hangs on SYN_SENT if something listens, gets CONN_REFUSED if nothing listens

    - by Amos Shapira
    I'm hitting a very strange problem: when I try to connect to one of our servers, the client hangs in SYN_SENT if something listens on the port (e.g. Apache on port 80, sshd on port 22 or SMTP on port 25), but if I try to connect to a port on which nothing listens, I immediately get a "connection refused" error. Connecting to other applications (e.g. rsyncd on some arbitrary port) succeeds. I ran tcpdump on the server and see that the SYN packets arrive, but it only sends a response if nothing listens on that port. For example, on the server I run:

        # tcpdump -nn port 81
        06:49:34.641080 IP 10.x.y.z.49829 > server.81: S 3966400723:3966400723(0) win 12320
        06:49:34.641118 IP server.81 > x.y.z.49829: R 0:0(0) ack 3966400724 win 0

    But if I listen on this port, e.g. with nc -4lvvv 81 &, then the output of tcpdump is:

        06:44:31.063614 IP x.y.z.45954 > server.81: S 3493682313:3493682313(0) win 12320

    (and this repeats until I stop it). The server is CentOS 5, the client is Ubuntu 11.04, and the connection is made between two LANs over per-user TCP OpenVPN. Connections to other servers on that network do not have this problem. Connecting from the other servers on the same network to that server works fine. Connections from other clients in our office over OpenVPN are also not a problem. What am I missing? Thanks.

    Read the article

  • bad request error 400 while using python requests.post function

    - by Toussah
    I'm trying to make a simple POST request via the Requests library for Python and I get a bad request error (400), while my URL is supposedly correct since I can use it to perform a GET. I'm very new to REST requests; I read many tutorials and documentation, but I guess there are still things I don't get, so my error could be basic. Maybe it's a lack of understanding of the kind of URL I'm supposed to send via POST. Here is my code:

        import requests

        v_username = "username"
        v_password = "password"
        v_headers = {'content-type': 'application/rdf+xml'}
        url = 'https://my.url'
        params = {'param': 'val_param'}
        payload = {'data': 'my_data'}

        r = requests.post(url, params=params, auth=(v_username, v_password),
                          data=payload, headers=v_headers, verify=False)
        print r

    I used the example from the Requests documentation.
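
    One common cause of a 400 here (offered as a guess, not a diagnosis): the declared Content-Type (application/rdf+xml) doesn't match the form-encoded dict being sent as data. A sketch that keeps header and body consistent, with the URL and credentials as the question's placeholders:

        import requests

        # Option 1: send an actual RDF/XML document so it matches the declared content type.
        rdf_body = "<rdf:RDF xmlns:rdf='http://www.w3.org/1999/02/22-rdf-syntax-ns#'></rdf:RDF>"
        r = requests.post(
            "https://my.url",                       # placeholder URL from the question
            data=rdf_body,                          # a string body is sent as-is
            headers={"Content-Type": "application/rdf+xml"},
            auth=("username", "password"),
            verify=False,
        )

        # Option 2: send form data and let Requests set the form content type itself.
        r = requests.post(
            "https://my.url",
            data={"data": "my_data"},               # dict -> application/x-www-form-urlencoded
            auth=("username", "password"),
            verify=False,
        )
        print r.text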

    Read the article

  • Adobe Reader Wants Sensitive Email Details

    - by KDM
    When I run Adobe Reader, it tells me:

        Either there is no default mail client or the current mail client cannot fulfill the messaging request. Please run Microsoft Outlook and set it as the default mail client.

    I have a couple of issues with this:

    1) It presupposes everyone has Microsoft Office installed. Not all home users have the budget or inclination for this.
    2) It presupposes everyone wants Microsoft Outlook to be their default mail client.
    3) I have Microsoft Office (incl. Outlook) installed and set as my default mail client. Even if I make it the default mail client from within the Adobe Reader Preferences, that doesn't stop the dialog appearing.
    4) I thought I'd give Adobe Reader a new email address in the preferences, just to get it to stop bugging me. I notice, though, that it wants the SMTP and POP addresses and the account password? They have got to be kidding. I just want to view PDF files.

    How do I get the message to go away without telling Adobe my life story, giving them my mother's maiden name, my favourite movie, my place of birth, the name of my first goldfish, and emptying the contents of my wallet for them?

    Read the article

  • Django ORM QuerySet intersection by a field

    - by Sri Raghavan
    These are the (pseudo) models I've got:

        Blog: name, etc.
        Article: name, blog, creator, etc.
        User (as per django.contrib.auth)

    So my problem is this: I've got two users, and I want to get all of the articles that the two users published on the same blog (no matter which blog). I can't simply filter the Article model by both users, because that would yield the set of Articles created by both users, which is obviously not what I want. But can I filter somehow to get all of the articles where a field of the object matches between the two querysets?
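
    One way to express this with the ORM is to collect the blogs each user has written for, intersect them, and then fetch the articles on those shared blogs. A sketch against the pseudo-models above; the field names blog and creator are the ones implied by the question, so adjust to the real schema:

        from myapp.models import Article   # hypothetical import path

        def articles_on_shared_blogs(user_a, user_b):
            # Blogs each user has published on (as blog ids).
            blogs_a = Article.objects.filter(creator=user_a).values_list('blog', flat=True)
            blogs_b = Article.objects.filter(creator=user_b).values_list('blog', flat=True)

            # Blogs they have in common.
            shared_blogs = set(blogs_a) & set(blogs_b)

            # All articles by either user on those shared blogs.
            return Article.objects.filter(creator__in=[user_a, user_b],
                                          blog__in=shared_blogs)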

    Read the article

  • postfix says mail sent ok, message does not arrive in ISPs inbox? no reject in log?

    - by Nick
    When I send a test message from my mail server to my @bellsouth.net email, the Postfix log shows it was sent OK, but the message never arrives in my BellSouth inbox. Shouldn't I get a failure notice or a bounce if AT&T is blocking the messages? I'm trying to troubleshoot why some customers aren't getting emails, but if there's nothing in mail.log to say a message was rejected, how do I know which messages were delivered successfully? The log shows:

        Feb 27 09:02:36 MyHOSTNAME postfix/pickup[26175]: D53A72713E5: uid=0 from=<root>
        Feb 27 09:02:36 MyHOSTNAME postfix/cleanup[26487]: D53A72713E5: message-id=<[email protected]>
        Feb 27 09:02:36 MyHOSTNAME postfix/qmgr[5595]: D53A72713E5: from=<[email protected]>, size=878, nrcpt=1 (queue active)
        Feb 27 09:02:37 MyHOSTNAME postfix/smtp[26490]: D53A72713E5: to=<[email protected]>, relay=gateway-f1.isp.att.net[204.127.217.16]:25, delay=0.57, delays=0.11/0.03/0.23/0.19, dsn=2.0.0, status=sent (250 ok ; id=20120227140036M0700qer4ne)
        Feb 27 09:02:37 MyHOSTNAME postfix/qmgr[5595]: D53A72713E5: removed

    The AT&T server accepted the message, right? I happen to have an AT&T/BellSouth email, but I don't have an account with every ISP we send to. I need some way of knowing whether a message is getting to its destination or not. Is there any setting in my main.cf file that would affect whether or not we get reject/bounce notices?

    Read the article

  • Bluehost Emails Getting Blocked

    - by colithium
    A site for my client has the run-of-the-mill "website with users" email pattern: create an account, get an activation email; get an email when a subscription is expiring; and so on. The site is hosted on Bluehost, and it currently uses PHP's mail() function. There isn't much configuration allowed (as far as I know). The trouble is, about a third of these emails disappear into the void. They aren't in spam or junk folders, there's no bounce message, they just cease to exist. I've read about Bluehost email troubles, but I can't figure out what my options are for fixing it. These aren't marketing emails, i.e. they have user-specific information contained within them. I suppose if a solution offers a good templating system, that would be fine. What are my options? Excerpt of headers when delivered to a Gmail address:

        Received-SPF: neutral (google.com: 00.000.000.000 is neither permitted nor denied by best guess record for domain of domain@box###.bluehost.com) client-ip=00.000.000.000;
        DomainKey-Status: good
        Authentication-Results: mx.google.com; spf=neutral (google.com: 00.000.000.000 is neither permitted nor denied by best guess record for domain of domain@box###.bluehost.com) smtp.mail=domain@box###.bluehost.com; domainkeys=pass [email protected]

    Read the article

  • SSL certificates work fine from command line but fail in script

    - by jrallison
    I'm trying to set up email notifications for my continuous integration server. I have a script which uses nail to send the email when the build works:

        #!/bin/bash
        echo "Build Worked!" | nail -A myisp -s 'Build Success' [email protected]

    When I run this from the command line with sh build-worked, it works and I receive the email. However, when I start the continuous integration server, which executes the same script, I get the following error:

        nail: /opt/bitnami/common/lib/libssl.so.0.9.8: no version information available (required by nail)
        nail: /opt/bitnami/common/lib/libcrypto.so.0.9.8: no version information available (required by nail)
        Error with certificate at depth: 0
         issuer = /C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/[email protected]
         subject = /C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com
         err 20: unable to get local issuer certificate
        Continue (y/n)?
        could not initiate SSL/TLS connection: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        . . . message not sent.

    I must be missing some configuration; any ideas?

    Read the article

  • Is there any way to send Outlook meeting requests from a non-default calendar?

    - by rbeier
    Hi, We have a user with two Outlook accounts. [email protected] is of type Exchange; [email protected] is of type IMAP/SMTP. Both are actually on our Exchange server; but since an Outlook profile can only have one Exchange account, the second one is set up as IMAP. The user would like to send a meeting request from her xyz.com account, so the "from" address appears as [email protected]. Unfortunately that doesn't work. If she creates the meeting in her xyz.com calendar, the meeting request still goes out through her Exchange account, [email protected]. The meeting request "compose message" window has an Account dropdown below the Send button, but this has no effect. Before she sends the invitation, a warning appears: "Responses to this meeting request will not be tallied because this meeting is not in your main Calendar folder. Is this OK?" Is there any workaround for this? We're using Outlook 2007 and Exchange 2003 SP2. Thanks, Richard

    Read the article

  • asp:Login control requests

    - by Dean
    Hi all, I ran into an issue. We are using WebForms on a site with this directory structure:

        root:   /
        secure: /securepages/

    We only want users who are logged in to access /securepages/. Currently we are using the Login control, .NET 3.5, and forms auth, and all is working OK, but now we have thrown an SSL cert into the mix. The issue is that the login control is requesting WebResource.axd?d=XukT0PE1PS-iOKw3RT8Z6g2&t=633834231612265882 from the non-secure URL, e.g. http://www.mysite.com/WebResource.axd?d=XukT0PE1PS-iOKw3RT8Z6g2&t=633834231612265882. This causes the browser to prompt the user about insecure content. I am using some redirecting in global.asax to handle redirection to https://xxxxxlogin.aspx if login.aspx is requested from http://. Thanks

    Read the article

  • Open Id XRDS Discovery

    - by Asciant
    I am working with OpenID, just playing around making a class to interact with / authenticate OpenIDs on my site (in PHP). I know there are a few other libraries (like RPX), but I want to use my own (it's good for helping me better understand the protocol and whether it's right for me). The question I have relates to the OpenID discovery sequence. Basically I have reached the point where I am looking at using the XRDS doc to get the local identity (openid.identity) from the claimed identity (openid.claimed_id). My question is: do I have to make a cURL request to get the XRDS location (X-XRDS-Location) and then make another cURL request to get the actual XRDS doc? It seems like with a dumb request I only make one cURL request and get the OpenID server, but I have to make two to use the XRDS smart method. That just doesn't seem right; can anyone give me some info?
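
    For what it's worth, that is roughly how Yadis discovery is laid out: one request that either returns the XRDS document directly or points to it via an X-XRDS-Location header, and a second request only in the latter case. A rough Python sketch of the flow (the poster's code is PHP, but the steps are the same):

        import urllib2

        def fetch_xrds(claimed_id):
            # First request: ask for XRDS directly; many providers serve it straight away.
            req = urllib2.Request(claimed_id, headers={'Accept': 'application/xrds+xml'})
            resp = urllib2.urlopen(req)
            info = resp.info()

            if info.gettype() == 'application/xrds+xml':
                return resp.read()              # got the XRDS document in one round trip

            location = info.getheader('X-XRDS-Location')
            if location:
                # Second request, only needed when the first response merely pointed at the doc.
                return urllib2.urlopen(location).read()

            return None                          # fall back to HTML <meta> discovery, etc.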

    Read the article

  • How are spam e-mails filtered?

    - by kevindqc
    Hello. I'm just wondering how some e-mails get past the spam filter, and some don't? Everyday I get World of Warcraft phishing emails that get past the filter... For example, here's a phishing email (just the header) I got in my inbox, and not in my junk mail: X-Message-Delivery: Vj0xLjE7dXM9MDtsPTA7YT0wO0Q9MjtTQ0w9Ng== X-Message-Status: n:0 X-SID-PRA: [email protected] X-AUTH-Result: NONE X-Message-Info: M98loaK0Lo27IVRxloyPIZmAwUHKn18nx0o/idLdvGYjK48i19NuvFOnRFYGWE+HdIrNJpi1XaYx0gaAV13cgRnkWSzgHKG1 Received: from blizzard.com ([204.45.59.37]) by SNT0-MC3-F21.Snt0.hotmail.com with Microsoft SMTPSVC(6.0.3790.3959); Sat, 10 Apr 2010 06:38:24 -0700 Received: from hxeabjlh ([192.168.1.165]) (envelope-sender <[email protected]>) by 192.168.1.111 with ESMTP for <[email protected]>; Sat, 10 Apr 2010 08:43:24 -0500 Reply-To: <[email protected]> Sender: [email protected] Message-ID: <DE567AFB9E2F3DD985A2D9A8D12D2917@hxeabjlh> From: "[email protected]" <[email protected]> To: <[email protected]> Subject: World of Warcraft Account Password verification Date: Sat, 10 Apr 2010 21:38:10 +0800 MIME-Version: 1.0 Content-Type: multipart/alternative; boundary="----=_NextPart_000_04EE_0137659E.1AA23350" X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5512 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5512 Return-Path: [email protected] X-OriginalArrivalTime: 10 Apr 2010 13:38:24.0607 (UTC) FILETIME=[17F3A6F0:01CAD8B3] From what I understand, when you send an email with SMTP, you can specify any hostname in the "HELO" command. Here, the spammer specified "blizzard.com". And he sent his email through Hotmail using Outlook Express. I just don't understand how this gets past the spam filter? There's this SPF thing that seems to exist... but it doesn't seem to be used by blizzard? I'm on Windows, and if I use nslookup to look for the TXT records of blizzard.com and worldofwarcraft.com, I don't see a thing.... so blizzard is not using SPF? Why would that be?

    Read the article

  • Postfix / Dovecot email setup not storing email

    - by Nick Duffell
    I'm trying to set up Postfix / Dovecot on my Debian server to use it as a mail server. I set everything up according to a tutorial on the net, and it all seemed OK. I can send emails from it, so SMTP is not a problem; however, I cannot receive emails. Looking at the files in /home/nick/mail/ I can see that if I send an email to myself (from the server, to itself), the emails are there, but they are put straight into the Deleted Messages folder. I don't know why this is. When I send an email from another mail account (not on this server), the emails are nowhere to be found. Also, looking at the log file /var/log/mail.log, all seems to be OK; I get the following when I receive an email, which looks fine to me:

        Nov 7 22:47:22 nickduffell postfix/local[17825]: 05B1173581A6: to=, relay=local, delay=0.37, delays=0.31/0.02/0/0.03, dsn=2.0.0, status=sent (delivered to mailbox)

    Any ideas? Thanks. EDIT: I should also add that although the emails I send myself are in the Deleted Messages folder, and in my mail client I can see that "Trash" has 3 items, I cannot download them in my mail client...

    Read the article

  • Django syncdb not making tables for my app

    - by Rosarch
    It used to work, and now it doesn't: python manage.py syncdb no longer makes tables for my app. From settings.py:

        INSTALLED_APPS = (
            'django.contrib.auth',
            'django.contrib.contenttypes',
            'django.contrib.sessions',
            'django.contrib.sites',
            'mysite.myapp',
            'django.contrib.admin',
        )

    What could I be doing wrong? The breakage appeared to coincide with editing this model in models.py, but that could be total coincidence. I commented out the lines I changed, and it still doesn't work.

        class MyUser(models.Model):
            user = models.ForeignKey(User, unique=True)
            takingReqSets = models.ManyToManyField(RequirementSet, blank=True)
            takingTerms = models.ManyToManyField(Term, blank=True)
            takingCourses = models.ManyToManyField(Course, through=TakingCourse, blank=True)
            school = models.ForeignKey(School)
            # minCreditsPerTerm = models.IntegerField(blank=True)
            # maxCreditsPerTerm = models.IntegerField(blank=True)
            # optimalCreditsPerTerm = models.IntegerField(blank=True)

    UPDATE: When I run python manage.py loaddata initial_data, it gives an error: DeserializationError: Invalid model identifier: myapp.SomeModel. Loading this data had worked fine before. This error is thrown on the very first data object in the data file.

    Read the article

  • ASP.NET sending email through exchange problem

    - by Solmead
    I have an Exchange 2010 server running on Windows 2008 R2, and I also have a remote web server running Windows 2003 with multiple sites on it (all ASP.NET MVC 2 sites). I set up a Transport in Exchange, and all the websites on my remote web server can send email without a problem to anyone on the Exchange server and to any external domain. Now for my problem. I am having issues with that web server, so I moved one of the websites to run on my Exchange server. It runs well (it's a low-traffic website), except that email doesn't work from that site. I tried changing the Transport in Exchange to add the IP address of the local machine and the 127.0.0.1 addresses, and it still isn't sending any email. Any ideas on how to get this working? The remote websites can still send email without a problem, and the version of the site that I had to move can still send email from the remote server, but on the Exchange server email does not send for that website. I would guess it is a Transport issue; since it is running on the same server, a firewall shouldn't be the issue. I changed the SMTP setting in web.config to localhost, and now I do receive email to my account on the Exchange server, but I do not receive any emails at outside addresses. To add more description, this is a custom-developed ASP.NET MVC 2 website, and no errors were being generated in the code when sending the email in either case.

    Read the article

  • What is stopping postfix from delivering mail to the local transport agent?

    - by Dark Star1
    I have the following settings (as grabbed from my postconf -n output):

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        content_filter = smtp-amavis:[127.0.0.1]:10024
        inet_interfaces = all
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        maximal_backoff_time = 8000s
        maximal_queue_lifetime = 7d
        minimal_backoff_time = 1000s
        mydestination = $mydomain, localhost.$mydomain, localhost
        myhostname = //redacted
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter = +
        relayhost =
        smtp_helo_timeout = 60s
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_hard_error_limit = 12
        smtpd_recipient_limit = 10
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_authenticated_header = yes
        smtpd_sasl_local_domain =
        smtpd_sasl_path = private/auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_soft_error_limit = 3
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        unknown_local_recipient_reject_code = 450
        virtual_alias_maps = mysql:/etc/postfix/mysql_virtual_alias_maps.cf, mysql:/etc/postfix/mysql_virtual_alias_domainaliases_maps.cf
        virtual_gid_maps = static:8
        virtual_mailbox_base = /var/vmail
        virtual_mailbox_domains = mysql:/etc/postfix/mysql_virtual_domains_maps.cf
        virtual_mailbox_maps = mysql:/etc/postfix/mysql_virtual_mailbox_maps.cf, mysql:/etc/postfix/mysql_virtual_mailbox_domainaliases_maps.cf
        virtual_transport = virtual
        virtual_uid_maps = static:5000

    postconf also prints these warnings:

        postconf: warning: /etc/postfix/main.cf: unused parameter: virtual_overquota_bounce=yes
        postconf: warning: /etc/postfix/main.cf: unused parameter: virtual_mailbox_limit_maps=mysql:/etc/postfix/mysql_virtual_mailbox_limit_maps.cf
        postconf: warning: /etc/postfix/main.cf: unused parameter: virtual_maildir_limit_message=Sorry, the your maildir has overdrawn your diskspace quota, please free up some of spaces of your mailbox try again.
        postconf: warning: /etc/postfix/main.cf: unused parameter: virtual_create_maildirsize=yes
        postconf: warning: /etc/postfix/main.cf: unused parameter: virtual_mailbox_extended=yes
        postconf: warning: /etc/postfix/main.cf: unused parameter: virtual_mailbox_limit_override=yes
        postconf: warning: /etc/postfix/main.cf: unused parameter: smtpd_relay_restrictions=reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_non_fqdn_recipient, reject_unauth_destination, check_policy_service inet:127.0.0.1:10023, permit

    I am new to mail server configurations, but as I understand it from this message: status=deferred (mail transport unavailable), it means it can't deliver to the LDA. I am using Postfix 2.9.6 on Ubuntu 12.04 with Dovecot 2.0.19.

    Read the article
