Search Results

Search found 29930 results on 1198 pages for 'email client'.

Page 260/1198

  • Visual Studio 2010 Winform Application – Unable to resolve custom assemblies?

    - by Harish Ranganathan
    Recently I surfaced a problem where one of my friends had a tough time getting rid of an assembly reference error. Despite adding a reference to the assembly, referencing it in code kept spitting out the “The type or namespace name ‘ASSEMBLYNAME’ could not be found” error. This was a migration project and, owing to the above error, it was throwing another 100 errors. We tried adding a reference to the assembly in other projects and it was not even resolving the namespace while typing it out in the using section.

    Upon further digging into the error warnings, it indicated something to do with the .NET Framework being targeted, i.e. 4.0. My suspicion grew since the target framework was 4.0 and the assembly should have been recognized. Then, when we checked “Project – <APPNAME> Properties…”, the issue was with the default target framework, which is “.NET Framework 4 Client Profile”.

    By default, Visual Studio 2010 creates Windows Forms/WPF apps with the Target Framework set to .NET Framework 4 Client Profile. This is to minimize the framework size required to be bundled along with the app. Client Profile is a feature introduced in .NET 3.5 SP1 that allows users to package a minified version of the .NET Framework that doesn’t include stuff such as ASP.NET, server programming assemblies and a few other assemblies which are typically never used in desktop applications.

    Since the .NET Framework Client Profile is a minified version, it doesn’t contain all the assemblies related to web services and other deprecated assemblies. However, this application was a migration app and needed some of the references from Services, and hence couldn’t run. Once we changed the Target Framework to .NET Framework 4 instead of the default Client Profile, the application compiled; the equivalent project-file change is sketched below.

    Here is a link to a very nice article that explains the features of the .NET Framework 4 Client Profile, the assemblies supported by default, etc.: http://blogs.msdn.com/b/jgoldb/archive/2010/04/12/what-s-new-in-net-framework-4-client-profile-rtm.aspx

    Cheers !!
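
    A sketch of the equivalent edit in the WinForms project's .csproj file (the property names below are the ones VS 2010 writes into the project file; treat the snippet as illustrative rather than a drop-in file):

        <PropertyGroup>
          <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
          <!-- VS 2010 writes "Client" here for the Client Profile; clearing the value
               (or removing the element) targets the full .NET Framework 4. -->
          <TargetFrameworkProfile></TargetFrameworkProfile>
        </PropertyGroup>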

    Read the article

  • New Process For Receiving Oracle Certification Exam Results

    - by Brandye Barrington
    On November 15, 2012, Oracle Certification exam results will be available directly from Oracle's certification portal, CertView. After completing an exam at a testing center, you will log in to CertView to access and print your exam scores by selecting the See My New Exam Results Now link or the Print My New Exam Results Now link from the homepage. This will provide access to all certification and exam history in one place through Oracle, providing tighter integration with other activities at Oracle. This change in policy will also increase security around data privacy.

    AUTHENTICATE YOUR CERTVIEW ACCOUNT NOW
    One very important step you must take is to authenticate your CertView account BEFORE taking your exam. This way, if there are any issues with authorization, you have time to get these sorted out before testing. Keep in mind that it can take up to 3 business days for a CertView account to be manually authenticated, so completing this process before testing is key! You will need to create a web account at Pearson VUE prior to registering for your exam, and you will need to create an Oracle Web Account prior to authenticating your CertView account. The CertView account will be available for authentication within 30 minutes of creating a Pearson VUE web account at certview.oracle.com.

    GETTING YOUR EXAM RESULTS FROM ORACLE
    Before taking the scheduled exam, you should authenticate your account at certview.oracle.com using the email address and Oracle Testing ID in your Pearson VUE profile. You will be required to have an Oracle Web Account to authenticate your CertView account. After taking the exam, you will receive an email from Oracle indicating that your exam results are available at certview.oracle.com. If you have previously authenticated your CertView account, you will simply click on the link in the email, which will take you to CertView, log in and select See My New Exam Results Now. If you have not authenticated your CertView account before receiving this notification email, you will be required to authenticate your CertView account before accessing your exam results. Authentication requires an Oracle Web Account user name and password and the following information from your Pearson VUE profile: email address and Oracle Testing ID. Click on the link in the email to authenticate your CertView account. You will be given the option to create an Oracle Web Account if you do not already have one. After account authentication, you will be able to log in to CertView and select See My New Exam Results Now to view your exam results or Print My New Exam Results Now to print your exam results. As always, if you need assistance with your CertView account, please contact Oracle Certification Support.

    YOUR QUESTIONS ANSWERED
    More Information
    FAQ: Receiving Exam Scores
    FAQ: How Do I Log Into CertView?
    FAQ: How To Get Exam Results
    FAQ: Accessing Exam Results in CertView
    FAQ: How Will I Know When My Exam Results Are Available?
    FAQ: What If I Don't Get An Exam Results Email Alert?
    FAQ: How To Download and Print Exam Score Reports
    FAQ: What If I Think My Exam Results Are Wrong In CertView?
    FAQ: Is Oracle Changing The Way That Exams Are Scored?

    Read the article

  • LinkedIn API returning extra/incorrect login prompt

    - by Paul Osetinsky
    I have a Rails application running the omniauth-linkedin gem and the linkedin gem (essentially an API wrapper). When a user logs in, they receive a primary login prompt that displays to them the correct scopes (FULL PROFILE and EMAIL ADDRESS). However, after they log in, they get another login prompt that should not come up, and that ignores the initial scope request. It tells them that LinkedIn is only requesting their PROFILE OVERVIEW, which is incorrect.

    The problem must lie in my auth_controller, and I think it has to do with the URL that is created in one of the authentication stages (definitely right after the user enters their LinkedIn authentication credentials). Here is my auth_controller:

        require 'linkedin'

        class AuthController < ApplicationController
          def auth
            client = LinkedIn::Client.new(ENV['LINKEDIN_KEY'], ENV['LINKEDIN_SECRET'])
            request_token = client.request_token(:oauth_callback => "http://#{request.host_with_port}/callback")
            session[:rtoken] = request_token.token
            session[:rsecret] = request_token.secret
            redirect_to client.request_token.authorize_url
          end

          def callback
            client = LinkedIn::Client.new(ENV['LINKEDIN_KEY'], ENV['LINKEDIN_SECRET'])
            if session[:atoken].nil?
              pin = params[:oauth_verifier]
              atoken, asecret = client.authorize_from_request(session[:rtoken], session[:rsecret], pin)
              session[:atoken] = atoken
              session[:asecret] = asecret
              @user = current_user
              @user.uid = client.profile(:fields => ["id"]).id
              flash.now[:success] = 'Signed in with LinkedIn.'
            else
              client.authorize_from_access(session[:atoken], session[:asecret])
              @user.uid = client.profile(:fields => ["id"]).id
              flash.now[:success] = 'Signed in with LinkedIn.'
            end
            @user = current_user
            @user.save
            redirect_to current_user
          end
        end

    Just in case, here is my omniauth.rb file that states the scopes I am requesting for my application:

        Rails.application.config.middleware.use OmniAuth::Builder do
          provider :linkedin, ENV['LINKEDIN_KEY'], ENV['LINKEDIN_SECRET'],
            :scope => 'r_fullprofile r_emailaddress',
            :fields => ['id', 'email-address', 'first-name', 'last-name', 'headline', 'industry',
                        'picture-url', 'public-profile-url', 'location', 'positions', 'educations']
        end

    I can't figure out how to get rid of that second unnecessary and misleading prompt from LinkedIn and would appreciate any guidance! Thank you.

    Read the article

  • Django's self.client.login(...) does not work in unit tests

    - by thebossman
    I have created users for my unit tests in two ways:

    1) Create a fixture for "auth.user" that looks roughly like this:

        {
            "pk": 1,
            "model": "auth.user",
            "fields": {
                "username": "homer",
                "is_active": 1,
                "password": "sha1$72cd3$4935449e2cd7efb8b3723fb9958fe3bb100a30f2",
                ...
            }
        }

    I've left out the seemingly unimportant parts.

    2) Use 'create_user' in the setUp function (although I'd rather keep everything in my fixtures class):

        def setUp(self):
            User.objects.create_user('homer', '[email protected]', 'simpson')

    Note that the password is simpson in both cases. I've verified that this info is correctly being loaded into the test database time and time again. I can grab the User object using User.objects.get. I can verify the password is correct using 'check_password'. The user is active. Yet, invariably, self.client.login(username='homer', password='simpson') FAILS. I'm baffled as to why. I think I've read every single Internet discussion pertaining to this. Can anybody help?

    The login code in my unit test looks like this:

        login = self.client.login(username='homer', password='simpson')
        self.assertTrue(login)

    Thanks.

    Read the article

  • Is there an HTML attribute to tell smartphone keyboards to show special email keys?

    - by slolife
    I notice that when I'm using my touch-screen smartphone (no physical keyboard) and an app asks for an email address to be entered in a textbox, the on-screen keyboard is modified slightly to provide specialized keys that enter blocks of text, like '.com', or push some characters to a foreground key, like '@'. Is there an HTML attribute or style that I can add to my HTML input boxes that will tell the smartphone/browser to provide these specialized keys?
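
    For what it's worth, the HTML5 input types are the usual way to get this behaviour (assuming a browser and keyboard that understand them); a minimal sketch:

        <!-- type="email" makes most touch keyboards surface the @ and .com keys;
             type="tel" and type="url" give similar domain-specific layouts. -->
        <input type="email" name="user-email" placeholder="you@example.com">

    Browsers that don't recognise the type simply fall back to a plain text input, so the attribute is safe to add.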

    Read the article

  • Puppet Agent still able to connect to Master after certificate revocation

    - by chris
    In summary: the client connects for the first time and requests a cert; on the master, puppetca -s client is executed; the client gets the cert and completes the run successfully. Fine. But now: on the master, puppetca -c client is executed and the client's cert is no longer in the cert list; the client connects again and can perform the run as usual; restarting puppetmasterd doesn't solve the issue. How can I prevent the client from connecting once its cert has been revoked? Thanks in advance.

    Read the article

  • Best practice: How do I implement a list that can be rendered both server-side and client-side?

    - by André Pena
    Technologies involved: ASP.NET Web Forms and JavaScript (jQuery, for instance).

    Case: To make it clearer, let's take the Stack Overflow authors list as an example. This list can be manipulated at client-side: I can search, page and so forth. So obviously we would need to call jQuery.ajax to retrieve the HTML of each page given a search. Alright. Now this leaves me with the first question: what is the best way to render the response for the jQuery.ajax at server-side? I can't use templates I suppose, so the most obvious solution I think is to create the HTML tags as server controls and render them as the result of an ASHX request? Is this the best approach?

    Nice. That solved, we have yet another problem: when the user first enters the Authors List, the first list page should already come from the server completely rendered, right? Of course we could render the first page as an ajax call as well, but I don't think that's better. This time I CAN use templates to render the list, but this template couldn't be reused in case 1. What do I do?

    Now the final question: we have 2 rendering strategies, 1) client and 2) server. How do I reuse code for the 2 renderings? What are the best practices for solving these problems?

    Read the article

  • Online voice chat: Why client-server model vs. peer-to-peer model?

    - by sstallings
    I am adding online voice chat to a Silverlight app. I've been reviewing current apps, services and SDKs found through online searches and forums. I'm finding that the majority of these implement a client-server (C/S) model, and I'm trying to understand why that model versus a peer-to-peer (PTP) model. To me PTP would be preferable, because going direct between peers would be more efficient (fewer IP hops and no processing along the way by a server computer) and there would be no need for a server and its costs and dependencies. I found some products offer the ability to switch from PTP to C/S if the PTP proves insufficient.

    As I thought more about it, I could see that C/S could be better if there are more than two peers involved in a conversation: then the server (supposedly with more bandwidth) could do a better job of relaying each peer's outgoing traffic to the multiple other peers. In C/S many-to-many voice chatting, each peer's upstream broadband (which is where the bottleneck inherently is) would only have to carry each item of voice traffic once, and the server would then use its superior bandwidth to relay the message to the multiple other peers. But in a situation with one-on-one voice chatting it seems that PTP would be best: a server would not reduce either of the two peers' bandwidth requirements and would only add unnecessary overhead, dependency and cost.

    In one-on-one voice chatting: Am I mistaken on anything above? Would peer-to-peer be best? Would a server provide anything of value that could not be provided by a client-only program? Is there anything else that I should be taking into consideration? And lastly, can you recommend any Silverlight PTP or C/S voice chat products? Thanks in advance for any info.

    Read the article

  • (solved) jQuery click and drag/scroll window: jagged movement

    - by Josh
    Edit: derp, I was using pageX/Y instead of clientX/Y -- apparently scrollBy expects input with that offset rather than the other. Jaggy movement gone.

    I am getting jagged movement when doing small scroll increments using the following bindings. Can anyone point me in the right direction for how to smooth this out? FYI, it's intermittent. It seems like, if I click and hold for a second, then drag at a decent speed, there are no problems.

    Edit: What the hell? I get this output on debug... an obvious jog backwards and forwards. This will happen in succession and seems to have no correlation with the mouse, other than the mouse is moving.

        x 398 : 403   y 374 : 377
        x 403 : 399   y 377 : 374
        x 399 : 404   y 374 : 377

    Josh

        sococo.client.panMap = function(e){
            e.preventDefault();
            var movex = sococo.client.currX - e.pageX;
            var movey = sococo.client.currY - e.pageY;
            console.log( sococo.client.currX + " : " + e.pageX );
            window.scrollBy(movex, movey);
            sococo.client.currY = e.pageY;
            sococo.client.currX = e.pageX;
        }

        $(document).mousedown( function(e){
            e.preventDefault();
            sococo.client.currX = e.pageX;
            sococo.client.currY = e.pageY;
            $(document).bind( "mousemove", sococo.client.panMap );
        });

        $(document).mouseup( function(e){
            e.preventDefault();
            $(document).unbind( "mousemove", sococo.client.panMap );
        });

    Read the article

  • Timeout Considerations for Solicit Response

    - by Michael Stephenson
    Background
    One of the clients I work with had been experiencing some issues for a while surrounding web service timeouts. It's been a little challenging to work through the problems due to limitations in the diagnostic information available from one of the applications, but I learned some interesting things while troubleshooting the problem which don't seem to have been discussed much in the community, so I thought I'd share my findings.

    In the scenario we have BizTalk trying to make calls to a .NET web service which was exposed as a WSE 2 endpoint. In the process BizTalk will try to make a large number of concurrent web service calls to the application, and the backend application has more than enough infrastructure and capability to handle the load. We have configured the <ConnectionManagement> section of the BizTalk configuration file to support up to 100 concurrent connections from each of our 2 BizTalk send servers to the web servers of the application.

    The problem we were facing was that the BizTalk side was reporting a significant number of timeouts when calling the web service. One of the biggest issues was the challenge of being able to correlate a message from BizTalk to the IIS log in the .NET application and the custom logs in the application, especially when there was a fairly large number of servers hosting the web services. However, the key moment came when we were able to identify a specific call which had taken 40 seconds to execute on the server (yes, a long time I know, but that's a different story!). Anyway, we were able to identify that this had timed out on the BizTalk side. Based on the normal 2 minute timeout we knew something unexpected was going on. From here I decided to do some experimentation, and I wanted to start outside of BizTalk because my hunch was this was not a BizTalk behaviour but something which was being highlighted by BizTalk because of our large load.

    Server-side - Sample Web Service
    To begin with I created a sample web service. Nothing special, just a vanilla asmx web service hosted in IIS6 on Windows 2003 Standard Edition. The web service is just a hello world style web service as shown in the below picture. The only key feature is that the server side web method has a 30 second sleep in it and will trace out some information before and after the thread is set to sleep. In the configuration for this web service there again is nothing special; it's pretty much the most plain simple web service you could build.

    Client-Side
    To begin looking at what was happening with our example I created a number of different ways to consume the web service.

    SoapHttpClientProtocol Example
    I created a small application which would use a normal proxy generated to call the web service. It would iterate around a loop and make calls using the begin/end methods so I can do this asynchronously. I would do a loop of 20 calls with the ConnectionManager configuration section supporting only 5 concurrent connections to the server.

        <system.net>
            <connectionManagement>
                <remove address="*"/>
                <add address = "*" maxconnection = "12" />
                <add address = "http://<ServerName>" maxconnection = "5" />
            </connectionManagement>
        </system.net>

    The below picture shows an example of the service calling code; key points are:

    - I have configured the timeout of 40 seconds for the proxy
    - I am using the asynchronous methods on the proxy to call the web service

    The Test
    I would run the client and execute 21 calls to the web service.

    The Results
    Below is the client side trace showing what's happening on the client. In the below diagram is the web service side trace showing what's happening on the server. Some observations on the results are:

    - All of the calls were successful from the client's perspective
    - You could see the next call starting on the server as soon as the previous one had completed
    - Calls took significantly longer than 40 seconds from the start of our call to the return. In fact call 20 took 2 minutes and 30 seconds from the perspective of my code to execute, even though I had set the timeout to 40 seconds

    WSE 2 Sample
    In the second example I used the exact same code to call the web service again, with a single exception: I modified the web service proxy to derive from WebServicesClientProtocol, which is part of WSE 2 (using SP3). The below picture shows the basic code and the key points are:

    - I have configured the timeout of 40 seconds for the proxy
    - I am using the asynchronous methods on the proxy to call the web service

    The Test
    This test would execute 21 calls from the client to the web service.

    The Results
    The below trace is from the client side, and the one after it is from the server side. Some observations on the trace results for this scenario are:

    - With call 4, if you look at the server side trace it did not start executing on the server for a number of seconds after the other 4 initial calls which were accepted by the server. I re-ran the test and this happened a couple of times and not on most others, so at this point I'm just putting this down to something unexpected happening on the development machine and we will leave this observation out of scope of this article.
    - You can see that the client side trace statement executed almost immediately in all cases
    - All calls after the initial few calls would time out
    - On the client side, the calls that did time out timed out in a longer duration than the 40 seconds we set as the timeout
    - You can see that as calls were completing on the server the next calls were starting to come through
    - The calls that timed out on the client did actually connect to the server and their server side execution completed successfully

    Elaboration on the findings
    Based on the above observations I have drawn the below sequence diagram to illustrate conceptually what is happening. Everything except the final web service object is on the client side of the call. In the diagram below I've put two notes on the Web Service Proxy to show the two different places where the different base classes seem to start their timeout counters. From the earlier samples we can work out that the timeout counter for the WSE web service proxy starts before the one for the SoapHttpClientProtocol proxy, and the WSE one includes the time to get a connection from the pool, whereas the Soap proxy timeout just covers the method execution.

    One interesting observation is that if we rerun the above sample and increase the number of calls from 21 to 100,000, then for the WSE sample we will see a similar pattern where everything after the first few calls will time out on the client as soon as it makes a connection to the server, whereas the Soap proxy will happily plug away and process all of the calls without a single timeout. I have actually set the sample running overnight and this did happen. At this point you are probably thinking the same thoughts I was at the time about the differences in behaviour, which is right, and why they are different.

    I'm not sure there is a definitive answer to this in the documentation, or at least not one that I could find! I think you just have to consider that they are different and they could have different effects depending on your messaging solution. In lots of situations this is just not an issue, as your concurrent requests don't get to the situation where you end up throttling the web service calls on the client side; however this is definitely more common with an integration broker such as BizTalk where you often have high throughput requirements.

    Some of the considerations you should make
    Based on this behaviour you should be aware of the following:

    - In a .NET application, if you are making lots of concurrent web service calls from an application in an asynchronous manner, your users may think they are experiencing poor performance while you think your web service is working well. The problem could be that the client will have a default of 2 connections to remote servers, so you should bear this in mind
    - When you are developing a BizTalk solution or a .NET solution with the WSE 2 stack you may experience timeouts under load, and throttling the number of connections using the max connections element in the configuration file will not help you
    - For an application using WSE2 or SoapHttpClientProtocol, an expired timeout will not throw an error until after a connection to the server has been made, so you should consider this in your transaction and durability patterns

    Our Work Around
    In the short term, for our specific scenario, we know that we can handle this by just increasing our timeout value. There is only a specific small window when we get lots of concurrent traffic that causes this scenario, so we should be able to increase the timeout to take into consideration the additional client side wait, and on the odd occasion where we do get a timeout the BizTalk send port retry will handle this. What was causing our original problem was that for that short window we were getting a lot of retries, which significantly increased the load on our send servers and highlighted the issue.

    Longer Term Solution
    As a longer term solution this really gives us more ammunition to argue a migration to WCF. The application we are calling has some factors which limit the protocols we can use, but with WCF we would have more control over the various timeout options because in WCF you can configure specific parts of the timeout.

    Summary
    I've had this blog post on my to-do list for ages, but hopefully it will be useful to some people to just understand this behaviour and to possibly help you with some performance issues you may have. I do not believe there is too much in the way of documentation, particularly around WSE2 and ASMX in this area, so again another bit of ammunition for migrating to WCF. I'll try to do a follow-up post with the sample for WCF to show how this changes things.

    Read the article

  • Configure Windows Routes for VPN

    - by Florin Sabau
    I have a Virtual PC/VMware machine that runs Windows Server 2003. This virtual machine uses an IPSec VPN client program to connect to a remote network. I configured the virtual machine to have 2 NICs:

    - NAT: to be used by the VPN client to access the remote network
    - Host only: to be able to access the virtual machine from the host

    The reason I have this setup is because I want to be able to access some remote network from the host machine. I could've installed the VPN client on the host machine, but the host runs Windows 7 and the client doesn't support it.

    The problem: although the virtual machine is normally reachable (ping + http access), as soon as the VPN client is started, neither of the NIC addresses is reachable anymore. I'm wondering if it is a routing problem that needs to be addressed? How do routing and the VPN client connection affect the ability of the server to respond to client requests from the host?
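
    One way to check whether routing is actually the culprit (a sketch; the subnet and gateway below are placeholders for the host-only network, not values from the question) is to compare the VM's routing table before and after the VPN client connects, and, if the host-only route has been displaced, add it back persistently:

        route print
        route -p add 192.168.56.0 mask 255.255.255.0 192.168.56.1

    Note that many IPSec clients also enforce a policy that simply blocks non-tunnel traffic while connected, in which case no routing change on the VM will help and the fix has to come from the VPN client's split-tunnelling settings.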

    Read the article

  • IPC: Communication between Qt4 and MONO processes (on linux)

    - by elcuco
    I have to connect a Qt4 application to a Mono application. The current proof of concept uses network sockets (which is nice, I can debug using nc on the command line). But I am open to new suggestions. What are my alternatives?

    Edit: The original application stack is split into two parts: server + client. The client is supposed to show pictures and videos. Since we found that this is not possible in a sane way in Mono, we split the client into two parts, giving server - client - GUI. In the original implementation the client + GUI were the same application. Now the client is in C# (running on Mono), and the GUI is Qt4. Rewriting the client in Qt4 is not an option. Right now the communication between the client and the GUI is being done using TCP sockets through localhost. I am looking for better implementations.

    Read the article

  • SmtpClient, send email through smtp.gmail.com, but From another account.

    - by dynback.com
    I want to send email through the Gmail SMTP server, but users should see my corporate "From" address:

        SmtpClient smtp = new SmtpClient("smtp.gmail.com", 587);
        smtp.EnableSsl = true;
        smtp.Credentials = new NetworkCredential("[email protected]", "pass", "mail.dynback.com");

    I am getting an SmtpException: "The SMTP server requires a secure connection or the client was not authenticated. The server response was: 5.5.1 Authentication Required". I heard it's all possible and called "Relay", but I am not sure. Do I need to supply Google credentials somehow?
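
    For reference, a sketch of the usual System.Net.Mail pattern (addresses and password are placeholders): authentication has to be done with the Gmail account itself, while the corporate address goes on the message's From. Be aware that Gmail may still rewrite the From header unless that address is registered as a verified "Send mail as" alias on the account.

        // Authenticate as the Gmail account; show the corporate address as the sender.
        var smtp = new SmtpClient("smtp.gmail.com", 587)
        {
            EnableSsl = true,
            Credentials = new NetworkCredential("yourname@gmail.com", "app-password")
        };
        var message = new MailMessage(
            new MailAddress("noreply@dynback.com", "Dynback"),   // From (corporate)
            new MailAddress("recipient@example.com"))            // To
        {
            Subject = "Test",
            Body = "Hello from smtp.gmail.com"
        };
        smtp.Send(message);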

    Read the article

  • Failing report subscriptions

    - by DavidWimbush
    We had an interesting problem while I was on holiday. (Why doesn't this stuff ever happen when I'm there?) The sysadmin upgraded our Exchange server to Exchange 2010 and everyone's subscriptions stopped. My Subscriptions showed an error message saying that the email address of one of the recipients is invalid. When you create a subscription, Reporting puts your Windows user name into the To field and most users have no permission to edit it. By default, Reporting leaves it up to Exchange to resolve that into an email address. This only works if Exchange is set up to translate aliases or 'short names' into email addresses. It turns out this leaves Exchange open to being used as a relay, so it is disabled out of the box.

    You now have three options:

    1. Open up Exchange. That would be bad.
    2. Give all Reporting users the ability to edit the To field in a subscription. a) They shouldn't have to, it should just work. b) They don't really have any business subscribing anyone but themselves.
    3. Fix the report server to add the domain. This looks like the right choice and it works for us. See below for details.

    Pre-requisites: a single email domain name, and a clear relationship between the Windows user name and the email address. E.g. if the user name is joebloggs, then joebloggs@domainname needs to be the email address or an alias of it.

    Warning: saving changes to the rsreportserver.config file will restart the Report Server service, which effectively takes Reporting down for around 30 seconds. Time your action accordingly.

    Edit the file rsreportserver.config (most probably in the folder ..\Program Files[ (x86)]\Microsoft SQL Server\MSRS10_50[.instancename]\Reporting Services\ReportServer). There's a setting called DefaultHostName which is empty by default. Enter your email domain name without the leading '@'. Save the file. This domain name will be appended to any destination addresses that don't have a domain name of their own.
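
    As a rough sketch of what that fragment of rsreportserver.config ends up looking like (element names are from the SSRS email delivery settings; the server and domain shown are placeholders):

        <RSEmailDPConfiguration>
            <SMTPServer>mail.example.local</SMTPServer>
            <From>reports@example.com</From>
            <DefaultHostName>example.com</DefaultHostName>
        </RSEmailDPConfiguration>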

    Read the article

  • How to provide hyperlink in email pointing to a specific method inside gwt app (but not main page)

    - by subh
    My GWT app has a search result with an orderid column rendered as a hyperlink. On clicking it, it opens up another tab which shows the details. I want to expose this particular functionality externally, say in an email: http://www.myapp.com/XYZApp.html?orderid=1234, so that the user can go directly to the details page after logging in to the app. In the JSP world, it was pretty straightforward. Is it possible in GWT, given that the call to show the details page is not in the main GWT module (XYZApp.html)?
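
    A minimal sketch of the usual GWT approach (class and helper names here are hypothetical, not from the question): the entry point reads the query-string parameter with Window.Location.getParameter and routes to the details view when it is present.

        import com.google.gwt.core.client.EntryPoint;
        import com.google.gwt.user.client.Window;

        public class XYZApp implements EntryPoint {
            @Override
            public void onModuleLoad() {
                // Returns null when the URL has no ?orderid=... parameter.
                String orderId = Window.Location.getParameter("orderid");
                if (orderId != null) {
                    showOrderDetails(orderId);   // hypothetical: open the details view directly
                } else {
                    showSearchPage();            // hypothetical: normal start-up path
                }
            }

            private void showOrderDetails(String orderId) { /* app-specific */ }

            private void showSearchPage() { /* app-specific */ }
        }

    Using GWT's History tokens (e.g. XYZApp.html#order/1234) is a common alternative that plays better with back/forward navigation.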

    Read the article

  • Django: TypeError: 'str' object is not callable, referer: http://xxx

    - by user705415
    I've been wondering why, when I set the settings.py of my Django project 'arvindemo' to debug = False and deploy it on Apache with mod_wsgi, I get the 500 Internal Server Error.

    Env:
    Django 1.4.0
    Python 2.7.2
    mod_wsgi 2.8
    OS centOS

    Here is the recap: visit the homepage, go to sub page A/B/C/D, fill in some forms, then submit to the Apache server. Once I click the 'submit' button, I get the '500 Internal Server Error', and the error_log listed below (traceback):

        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250] Traceback (most recent call last):
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250]   File "/opt/python2.7/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 241, in __call__
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250]     response = self.get_response(request)
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250]   File "/opt/python2.7/lib/python2.7/site-packages/django/core/handlers/base.py", line 179, in get_response
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250]     response = self.handle_uncaught_exception(request, resolver, sys.exc_info())
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250]   File "/opt/python2.7/lib/python2.7/site-packages/django/core/handlers/base.py", line 224, in handle_uncaught_exception
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250]     if resolver.urlconf_module is None:
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250]   File "/opt/python2.7/lib/python2.7/site-packages/django/core/urlresolvers.py", line 323, in urlconf_module
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250]     self._urlconf_module = import_module(self.urlconf_name)
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250]   File "/opt/python2.7/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250]     __import__(name)
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250]   File "/opt/web/django/arvindemo/arvindemo/../arvindemo/urls.py", line 23, in <module>
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250]     url(r'^submitPage$', name=submitPage),
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250] TypeError: url() takes at least 2 arguments (2 given)

    When using the Django runserver with arvindemo.settings debug = True, everything is OK. But things changed once I set debug = False. Here is my views.py:

        from django.http import HttpResponseRedirect
        from django.http import HttpResponse, HttpResponseServerError
        from django.shortcuts import render_to_response
        import datetime, string
        from user_info.models import *
        from django.template import Context, loader, RequestContext
        import settings

        def hello(request):
            return HttpResponse("hello girl")

        def helpPage(request):
            return render_to_response('kktHelp.html')

        def server_error(request, template_name='500.html'):
            return render_to_response(template_name, context_instance=RequestContext(request))

        def page404(request):
            return render_to_response('404.html')

        def submitPage(request):
            post = request.POST
            Mall = 'goodsName'
            Contest = 'ojs'
            Presentation = 'addr'
            WeatherReport = 'city'
            Habit = 'task'
            if Mall in post:
                return submitMall(request)
            elif Contest in post:
                return submitContest(request)
            elif Presentation in post:
                return submitPresentation(request)
            elif Habit in post:
                return submitHabit(request)
            elif WeatherReport in post:
                return submitWeather(request)
            else:
                return HttpResponse(request.POST)
            return HttpResponseRedirect('404')

        def submitXXX():
            .....

        def xxxx():
            ....

    Here comes the urls.py:

        from django.conf.urls import patterns, include, url
        from views import *
        from django.conf import settings

        handler500 = 'server_error'

        urlpatterns = patterns('',
            url(r'^hello/$', hello),  # hello world
            url(r'^$', homePage),
            url(r'^time/$', getTime),
            url(r'^time/plus/(\d{1,2})/$', hoursAhead),
            url(r'^Ttime/$', templateGetTime),
            url(r'^Mall$', templateMall),
            url(r'^Contest$', templateContest),
            url(r'^Presentation$', templatePresentation),
            url(r'^Habit$', templateHabit),
            url(r'^Weather$', templateWeather),
            url(r'^Help$', helpPage),
            url(r'^404$', page404),
            url(r'^500$', server_error),
            url(r'^submitPage$', submitPage),
            url(r'^submitMall$', submitMall),
            url(r'^submitContest$', submitContest),
            url(r'^submitPresentation$', submitPresentation),
            url(r'^submitHabit$', submitHabit),
            url(r'^submitWeather$', submitWeather),
            url(r'^terms$', terms),
            url(r'^privacy$', privacy),
            url(r'^thanks$', thanks),
            url(r'^about$', about),
            url(r'^static/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.STATICFILES_DIRS}),
        )

    I'm sure there is no syntax error in my Django project, because when I use the Django runserver, everything is fine. Can anyone help? Best regards

    Read the article

  • Normalization of database for timesheet tool and ensure data integrity

    - by fireeyedboy
    I'm creating a timesheet application. I have the following entities (amongst others):

    Company
    Employee = an employee associated with a company
    Client = a client associated with a company

    So far I have the following (abbreviated) database setup:

        Company
        - id
        - name

        Employee
        - id
        - companyId (FK to Company.id)
        - name

        Client
        - id
        - companyId (FK to Company.id)
        - name

    Now, I want an employee to be associated with a client, but only if that client is associated with the company the employee works for. How would you guarantee this data integrity on a database level? Or should I just depend on the application to guarantee this data integrity? I thought about creating a many to many table like this:

        EmployeeClient
        - employeeId (FK to Employee.id)
        - companyId \ (combined FK to Client.companyId, Client.id)
        - clientId  /

    Thus, when I insert a client for an employee along with the employee's company id, the database should prevent this when the client is not associated with the employee's company id. Does this make sense? Because this still doesn't guarantee the employee is associated with the company. How do you deal with these things?

    UPDATE
    The scenario is as follows: A company has multiple employees. Employees will only be linked to one company. A company has multiple clients also. Clients will only be linked to one company. (Company is a sandbox, so to speak.) An employee of a company can be linked to a client of its company, but only if the client is part of the company's clientele.

    In other words: the application will allow a company to create/add employees and create/add clients (hence the companyId FK in the Employee and Client tables). Next, the company will be allowed to assign certain clients to certain of its employees (EmployeeClient table). Imagine an employee working on projects for a few clients for which s/he can write billable hours, but the employee must not be allowed to write billable hours for clients they are not assigned to by their employer (the company). So, employees will not automatically have access to all their company's clients, but only to those that the company has selected for them. Hopefully this has shed some more light on the matter.
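
    One way to get the database to enforce the rule (a sketch in generic ANSI SQL; table and column names follow the question, data types are assumed): give Employee and Client a secondary unique key on (id, companyId), then let EmployeeClient carry companyId once and use it in composite foreign keys to both tables, so a row can only exist when the employee and the client share the same company.

        CREATE TABLE Company (
            id   INT PRIMARY KEY,
            name VARCHAR(100) NOT NULL
        );

        CREATE TABLE Employee (
            id        INT PRIMARY KEY,
            companyId INT NOT NULL REFERENCES Company(id),
            name      VARCHAR(100) NOT NULL,
            UNIQUE (id, companyId)
        );

        CREATE TABLE Client (
            id        INT PRIMARY KEY,
            companyId INT NOT NULL REFERENCES Company(id),
            name      VARCHAR(100) NOT NULL,
            UNIQUE (id, companyId)
        );

        CREATE TABLE EmployeeClient (
            employeeId INT NOT NULL,
            clientId   INT NOT NULL,
            companyId  INT NOT NULL,
            PRIMARY KEY (employeeId, clientId),
            FOREIGN KEY (employeeId, companyId) REFERENCES Employee (id, companyId),
            FOREIGN KEY (clientId,   companyId) REFERENCES Client   (id, companyId)
        );

    Because both composite foreign keys reference the same companyId column, linking an employee to a client from a different company fails at insert time, which also covers the follow-up concern: the employee-to-company link is enforced by the first composite key.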

    Read the article

  • How do I send email over SMTP with SSL using Java client?

    - by Ido
    I need to send email over SMTP with SSL using a Java client. I'm not sure how to do that. If I have my server certificate installed on my Windows machine, how do I use it? If I want it to work on a non-Windows machine, do I need to get the certificates in a different way? BTW: if the SMTP server that I use is using SSL, can I be sure that it will send the mail to the recipient using SSL?
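
    A minimal sketch using the JavaMail API (assumed to be on the classpath; host, addresses and password are placeholders). With mail.smtp.ssl.enable the connection to your own SMTP server is encrypted, and the server's certificate only needs to be trusted by the JVM's default trust store, so nothing is read from the Windows certificate store; note this says nothing about how that server forwards the mail onward to the recipient's server.

        import java.util.Properties;
        import javax.mail.*;
        import javax.mail.internet.*;

        public class SslMailSender {
            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                props.put("mail.smtp.host", "smtp.example.com");
                props.put("mail.smtp.port", "465");
                props.put("mail.smtp.auth", "true");
                // Implicit SSL on connect; use "mail.smtp.starttls.enable" for STARTTLS on port 587.
                props.put("mail.smtp.ssl.enable", "true");

                Session session = Session.getInstance(props, new Authenticator() {
                    @Override
                    protected PasswordAuthentication getPasswordAuthentication() {
                        return new PasswordAuthentication("user@example.com", "password");
                    }
                });

                Message msg = new MimeMessage(session);
                msg.setFrom(new InternetAddress("user@example.com"));
                msg.setRecipient(Message.RecipientType.TO, new InternetAddress("someone@example.com"));
                msg.setSubject("Test over SSL");
                msg.setText("Hello from JavaMail over SSL");
                Transport.send(msg);
            }
        }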

    Read the article

  • Securing credentials passed to web service

    - by Greg Smith
    I'm attempting to design a single sign-on system for use in a distributed architecture. Specifically, I must provide a way for a client website (that is, a website on a different domain/server/network) to allow users to register accounts on my central system. So, when the user takes an action on a client website, and that action is deemed to require an account, the client will produce a page (on their site/domain) where the user can register for a new account by providing an email and password. The client must then send this information to a web service, which will register the account and return some session-token type value. The client will need to hash the password before sending it across the wire, and the web service will require https, but this doesn't feel like it's safe enough and I need some advice on how I can implement this in the most secure way possible.

    A few other bits of relevant information:

    - Ideally we'd prefer not to share any code with the client.
    - We've considered just redirecting the user to a secure page on the same server as the web service, but this is likely to be rejected for non-technical reasons.
    - We almost certainly need to salt the password before hashing and passing it over, but that requires the client to either a) generate the salt and communicate it to us, or b) come and ask us for the salt - both feel dirty.

    Any help or advice is most appreciated.

    Read the article

  • C# Asynchronous Network IO and OutOfMemoryException

    - by The.Anti.9
    I'm working on a client/server application in C#, and I need to get asynchronous sockets working so I can handle multiple connections at once. Technically it works the way it is now, but I get an OutOfMemoryException after about 3 minutes of running. MSDN says to use a wait handle to do WaitOne() after the socket.BeginAccept(), but it doesn't actually let me do that. When I try to do that in the code it says WaitHandle is an abstract class or interface, and I can't instantiate it. I thought maybe I'd try a static reference, but it doesn't have the WaitOne() method, just WaitAll() and WaitAny(). The main problem is that the docs don't give a full code snippet, so you can't actually see where their "wait handler" is coming from: it's just a variable called allDone, which also has a Reset() method in the snippet, which a WaitHandle doesn't have. After digging around in the docs, I found some related thing about an AutoResetEvent in the Threading namespace. It has a WaitOne() and a Reset() method. So I tried that around the while(true) { ... socket.BeginAccept( ... ); ... } loop. Unfortunately this makes it only take one connection at a time. So I'm not really sure where to go. Here's my code:

        class ServerRunner
        {
            private Byte[] data = new Byte[2048];
            private int size = 2048;
            private Socket server;
            static AutoResetEvent allDone = new AutoResetEvent(false);

            public ServerRunner()
            {
                server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
                IPEndPoint iep = new IPEndPoint(IPAddress.Any, 33333);
                server.Bind(iep);
                Console.WriteLine("Server initialized..");
            }

            public void Run()
            {
                server.Listen(100);
                Console.WriteLine("Listening...");
                while (true)
                {
                    //allDone.Reset();
                    server.BeginAccept(new AsyncCallback(AcceptCon), server);
                    //allDone.WaitOne();
                }
            }

            void AcceptCon(IAsyncResult iar)
            {
                Socket oldserver = (Socket)iar.AsyncState;
                Socket client = oldserver.EndAccept(iar);
                Console.WriteLine(client.RemoteEndPoint.ToString() + " connected");
                byte[] message = Encoding.ASCII.GetBytes("Welcome");
                client.BeginSend(message, 0, message.Length, SocketFlags.None, new AsyncCallback(SendData), client);
            }

            void SendData(IAsyncResult iar)
            {
                Socket client = (Socket)iar.AsyncState;
                int sent = client.EndSend(iar);
                client.BeginReceive(data, 0, size, SocketFlags.None, new AsyncCallback(ReceiveData), client);
            }

            void ReceiveData(IAsyncResult iar)
            {
                Socket client = (Socket)iar.AsyncState;
                int recv = client.EndReceive(iar);
                if (recv == 0)
                {
                    client.Close();
                    server.BeginAccept(new AsyncCallback(AcceptCon), server);
                    return;
                }
                string receivedData = Encoding.ASCII.GetString(data, 0, recv);
                //process received data here
                byte[] message2 = Encoding.ASCII.GetBytes("reply");
                client.BeginSend(message2, 0, message2.Length, SocketFlags.None, new AsyncCallback(SendData), client);
            }
        }
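
    For what it's worth, the MSDN asynchronous-socket sample gates the accept loop with a ManualResetEvent (which does have Reset(), Set() and WaitOne()). A sketch of that pattern adapted to the field and method names above, so BeginAccept is only queued once per pending connection instead of as fast as while(true) can spin:

        // Replace the AutoResetEvent field with:
        static ManualResetEvent acceptDone = new ManualResetEvent(false);

        public void Run()
        {
            server.Listen(100);
            while (true)
            {
                acceptDone.Reset();                                    // arm the gate
                server.BeginAccept(new AsyncCallback(AcceptCon), server);
                acceptDone.WaitOne();                                  // block until AcceptCon has fired
            }
        }

        void AcceptCon(IAsyncResult iar)
        {
            acceptDone.Set();     // release Run() so it can post the next BeginAccept
            Socket listener = (Socket)iar.AsyncState;
            Socket client = listener.EndAccept(iar);
            // ...continue with BeginSend/BeginReceive on 'client' exactly as before...
        }

    Multiple clients are still served concurrently, because each accepted socket carries on in its own callback chain; only the posting of the next BeginAccept is serialized.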

    Read the article

  • How can I get SSO for Alfresco on Windows 7 to work?

    - by Maarten
    Domain: AD on Windows 2008 R2; Linux server running Alfresco 3.4c; Windows 7 client. I'm trying to get automatically logged into Alfresco from the Windows 7 client. I've looked with Wireshark to see what happens:

    1. Client goes to /alfresco
    2. Server sends a redirect to the page
    3. Client goes to the redirected page
    4. Server sends a WWW-Authenticate: Negotiate header
    5. Client DOES NOT respond to this

    How can I configure the Windows 7 client (or the AD domain) so that the client will in fact engage with the SPNEGO protocol instead of just asking for user credentials? (The user is logged in through Kerberos in the domain.)

    Read the article

  • How does one retrieve the email address of a user with GData?

    - by sblom
    I'm trying to use GData to retrieve the email address, real name, and profile URL of the user that just authorized my site using Google OAuth. We know how to request this using Google's OpenID flow, but the OpenID flow has the severe limitation that we have to ask for a Google Apps user's domain before we know where to send them to log in. At least with OAuth (or even AuthSub), the user gets prompted for which of their Google accounts to log in with.

    Read the article
