Search Results

Search found 9658 results on 387 pages for 'authentication provider'.

Page 339/387 | < Previous Page | 335 336 337 338 339 340 341 342 343 344 345 346  | Next Page >

  • How to post a SOAP request from a browser?

    - by understack
    Is it possible to send a SOAP request directly from a browser to a service provider, and then parse the output in JavaScript to show the result? For example, if I have a SOAP request like this: POST /InStock HTTP/1.1 Host: www.example.org Content-Type: application/soap+xml; charset=utf-8 Content-Length: nnn <?xml version="1.0"?> <soap:Envelope xmlns:soap="http://www.w3.org/2001/12/soap-envelope" soap:encodingStyle="http://www.w3.org/2001/12/soap-encoding"> <soap:Body xmlns:m="http://www.example.org/stock"> <m:GetStockPrice> <m:StockName>IBM</m:StockName> </m:GetStockPrice> </soap:Body> </soap:Envelope> can I then get the IBM stock price by clicking a link on a web page, and show the result after processing the XML? EDIT: Can I send the whole envelope as POST data?
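
    Browsers can POST that envelope with XMLHttpRequest as long as the service is on the same origin or allows cross-origin requests. A minimal sketch; the endpoint path and what you pull out of the response are placeholder assumptions, not anything defined by the question:

        // a sketch: POST the envelope, then inspect the response XML
        var envelope =
            '<?xml version="1.0"?>' +
            '<soap:Envelope xmlns:soap="http://www.w3.org/2001/12/soap-envelope">' +
            '<soap:Body xmlns:m="http://www.example.org/stock">' +
            '<m:GetStockPrice><m:StockName>IBM</m:StockName></m:GetStockPrice>' +
            '</soap:Body></soap:Envelope>';

        var xhr = new XMLHttpRequest();
        xhr.open('POST', '/InStock', true);   // hypothetical endpoint
        xhr.setRequestHeader('Content-Type', 'application/soap+xml; charset=utf-8');
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                var doc = xhr.responseXML;    // pick the price element out of this
                console.log(doc ? new XMLSerializer().serializeToString(doc) : xhr.responseText);
            }
        };
        xhr.send(envelope);

    The same-origin policy still applies: if the SOAP endpoint lives on another domain and does not send CORS headers, the call has to be proxied through your own server.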

    Read the article

  • Updating MS Access Database from Datagridview

    - by Peter Roche
    I am trying to update an ms access database from a datagridview. The datagridview is populated on a button click and the database is updated when any cell is modified. The code example I have been using populates on form load and uses the cellendedit event. private OleDbConnection connection = null; private OleDbDataAdapter dataadapter = null; private DataSet ds = null; private void Form2_Load(object sender, EventArgs e) { string connetionString = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source='C:\\Users\\Peter\\Documents\\Visual Studio 2010\\Projects\\StockIT\\StockIT\\bin\\Debug\\StockManagement.accdb';Persist Security Info=True;Jet OLEDB:Database Password="; string sql = "SELECT * FROM StockCount"; connection = new OleDbConnection(connetionString); dataadapter = new OleDbDataAdapter(sql, connection); ds = new DataSet(); connection.Open(); dataadapter.Fill(ds, "Stock"); connection.Close(); dataGridView1.DataSource = ds; dataGridView1.DataMember = "Stock"; } private void addUpadateButton_Click(object sender, EventArgs e) { } private void dataGridView1_CellEndEdit(object sender, DataGridViewCellEventArgs e) { try { dataadapter.Update(ds,"Stock"); } catch (Exception exceptionObj) { MessageBox.Show(exceptionObj.Message.ToString()); } } The error I receive is Update requires a valid UpdateCommand when passed DataRow collection with modified rows. I'm not sure where this command needs to go and how to reference the cell to update the value in the database.
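
    A data adapter only knows how to SELECT until it is given INSERT/UPDATE/DELETE commands; one common fix is to let OleDbCommandBuilder generate them from the adapter's existing SelectCommand. A sketch, assuming StockCount has a primary key (the builder needs one to produce an UPDATE):

        // a sketch: attach a command builder so the adapter gets its
        // INSERT/UPDATE/DELETE commands generated from the SELECT
        private void dataGridView1_CellEndEdit(object sender, DataGridViewCellEventArgs e)
        {
            try
            {
                using (OleDbCommandBuilder builder = new OleDbCommandBuilder(dataadapter))
                {
                    dataadapter.Update(ds, "Stock");
                }
            }
            catch (Exception exceptionObj)
            {
                MessageBox.Show(exceptionObj.Message);
            }
        }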

    Read the article

  • How can I create a SQL table using excel columns?

    - by Phsika
    I need help generating the column names from Excel automatically. I think we can run something like this via C#: CREATE TABLE [dbo].[Addresses_Temp] ( [FirstName] VARCHAR(20), [LastName] VARCHAR(20), [Address] VARCHAR(50), [City] VARCHAR(30), [State] VARCHAR(2), [ZIP] VARCHAR(10) ) How can I read the column names from Excel? private void Form1_Load(object sender, EventArgs e) { ExcelToSql(); } void ExcelToSql() { string connectionString = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Source\MPD.xlsm;Extended Properties=""Excel 12.0;HDR=YES;"""; // if you don't want to show the header row (first row) // use 'HDR=NO' in the string string strSQL = "SELECT * FROM [Sheet1$]"; OleDbConnection excelConnection = new OleDbConnection(connectionString); excelConnection.Open(); // This code will open excel file. OleDbCommand dbCommand = new OleDbCommand(strSQL, excelConnection); OleDbDataAdapter dataAdapter = new OleDbDataAdapter(dbCommand); // create data table DataTable dTable = new DataTable(); dataAdapter.Fill(dTable); // bind the datasource // dataBingingSrc.DataSource = dTable; // assign the dataBindingSrc to the DataGridView // dgvExcelList.DataSource = dataBingingSrc; // dispose used objects if (dTable.Rows.Count > 0) MessageBox.Show("Count:" + dTable.Rows.Count.ToString()); dTable.Dispose(); dataAdapter.Dispose(); dbCommand.Dispose(); excelConnection.Close(); excelConnection.Dispose(); }
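
    Once dataAdapter.Fill(dTable) has run, the header row (HDR=YES) supplies the DataTable's column names, so they can be enumerated from dTable.Columns. A sketch that builds the CREATE TABLE text from them; mapping every column to NVARCHAR(255) is a simplifying assumption, not something read from the sheet (requires using System.Collections.Generic):

        // a sketch: derive a CREATE TABLE statement from the columns Jet/ACE
        // inferred from the header row
        string BuildCreateTable(DataTable dTable, string tableName)
        {
            List<string> columns = new List<string>();
            foreach (DataColumn col in dTable.Columns)
            {
                columns.Add("[" + col.ColumnName + "] NVARCHAR(255)");
            }
            return "CREATE TABLE [dbo].[" + tableName + "] (" +
                   string.Join(", ", columns.ToArray()) + ")";
        }

    The returned text can then be executed with a SqlCommand against the target database before copying the rows across.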

    Read the article

  • How to call Android contacts list?

    - by aZn137
    I'm making an Android app and need to call the phone's contact list: open the contacts list, pick a contact, then return to my app with the contact's name. Here's the code I found on the internet, but it doesn't work. Please help: import android.app.ListActivity; import android.content.Intent; import android.database.Cursor; import android.os.Bundle; import android.provider.Contacts.People; import android.view.View; import android.widget.ListAdapter; import android.widget.ListView; import android.widget.SimpleCursorAdapter; import android.widget.TextView; public class Contacts extends ListActivity { private ListAdapter mAdapter; public TextView pbContact; public static String PBCONTACT; public static final int ACTIVITY_EDIT=1; private static final int ACTIVITY_CREATE=0; // Called when the activity is first created. @Override public void onCreate(Bundle icicle) { super.onCreate(icicle); Cursor C = getContentResolver().query(People.CONTENT_URI, null, null, null, null); startManagingCursor(C); String[] columns = new String[] {People.NAME}; int[] names = new int[] {R.id.row_entry}; mAdapter = new SimpleCursorAdapter(this, R.layout.mycontacts, C, columns, names); setListAdapter(mAdapter); } // end onCreate() // Called when contact is pressed @Override protected void onListItemClick(ListView l, View v, int position, long id) { super.onListItemClick(l, v, position, id); Cursor C = (Cursor) mAdapter.getItem(position); PBCONTACT = C.getString(C.getColumnIndex(People.NAME)); // RHS 05/06 //pbContact = (TextView) findViewById(R.id.myContact); //pbContact.setText(new StringBuilder().append("b")); Intent i = new Intent(this, NoteEdit.class); startActivityForResult(i, ACTIVITY_CREATE); } }
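
    A sketch of the usual pick-and-return flow on the same (old, pre-ContactsContract) android.provider.Contacts API the code above imports; it assumes the snippet lives in an Activity of your own app rather than in a ListActivity that lists the contacts itself:

        // a sketch: launch the built-in contact picker and read the chosen name
        private static final int PICK_CONTACT = 100;

        private void pickContact() {
            Intent intent = new Intent(Intent.ACTION_PICK, People.CONTENT_URI);
            startActivityForResult(intent, PICK_CONTACT);
        }

        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            super.onActivityResult(requestCode, resultCode, data);
            if (requestCode == PICK_CONTACT && resultCode == RESULT_OK) {
                Cursor c = getContentResolver().query(data.getData(),
                        new String[] { People.NAME }, null, null, null);
                if (c != null && c.moveToFirst()) {
                    String contactName = c.getString(c.getColumnIndex(People.NAME));
                    // hand contactName back to the rest of the app here
                }
                if (c != null) {
                    c.close();
                }
            }
        }

    Note that android.provider.Contacts is deprecated from Android 2.0 onwards in favour of ContactsContract, where the same pattern applies with different URIs and column names.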

    Read the article

  • backbone.js Model.get() returns undefined, scope using coffeescript + coffee toaster?

    - by benipsen
    I'm writing an app using coffeescript with coffee toaster (an awesome NPM module for stitching) that builds my app.js file. Lots of my application classes and templates require info about the current user so I have an instance of class User (extends Backbone.Model) stored as a property of my main Application class (extends Backbone.Router). As part of the initialization routine I grab the user from the server (which takes care of authentication, roles, account switching etc.). Here's that coffeescript: @user = new models.User @user.fetch() console.log(@user) console.log(@user.get('email')) The first logging statement outputs the correct Backbone.Model attributes object in the console just as it should: User _changing: false _escapedAttributes: Object _pending: Object _previousAttributes: Object _silent: Object attributes: Object account: Object created_on: "1983-12-13 00:00:00" email: "[email protected]" icon: "0" id: "1" last_login: "2012-06-07 02:31:38" name: "Ben Ipsen" roles: Object __proto__: Object changed: Object cid: "c0" id: "1" __proto__: ctor app.js:228 However, the second returns undefined despite the model attributes clearly being there in the console when logged. And just to make things even more interesting, typing "window.app.user.get('email')" into the console manually returns the expected value of "[email protected]"... ? Just for reference, here's how the initialize method compiles into my app.js file: Application.prototype.initialize = function() { var isMobile; isMobile = navigator.userAgent.match(/(iPhone|iPod|iPad|Android|BlackBerry)/); this.helpers = new views.DOMHelpers().initialize().setup_viewport(isMobile); this.user = new models.User(); this.user.fetch(); console.log(this.user); console.log(this.user.get('email')); return this; }; I initialize the Application controller in my static HTML like so: jQuery(document).ready(function(){ window.app = new controllers.Application(); }); Suggestions please and thank you!
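
    The usual explanation for exactly this symptom is that fetch() is asynchronous: the console shows the attributes because it logs a live reference that gets filled in once the response arrives, but get('email') ran before that happened (which is also why typing it into the console later works). A sketch of waiting for the fetch, in the compiled-JavaScript form the question already shows:

        // fetch() is asynchronous, so read attributes in its callbacks
        this.user = new models.User();
        this.user.fetch({
            success: function (user) {
                console.log(user.get('email'));   // defined here
                // continue any initialisation that needs the user
            },
            error: function (user, response) {
                console.log('could not load user', response);
            }
        });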

    Read the article

  • Amazon access key showing in URL for Carrierwave and Fog

    - by kcurtin
    I just switched from storing my images uploaded via Carrierwave locally to using Amazon s3 via the fog gem in my Rails 3.1 app. While images are being added, when I click on an image in my application, the URL is providing my access key and a signature. Here is a sample URL (XXX replaced the string with the info): https://s3.amazonaws.com/bucketname/uploads/photo/image/2/IMG_4842.jpg?AWSAccessKeyId=XXX&Signature=XXX%3D&Expires=1332093418 This is happening in development (localhost:3000) and when I am using heroku for production. Here is my uploader: class ImageUploader < CarrierWave::Uploader::Base include CarrierWave::RMagick storage :fog def store_dir "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}" end process :convert => :jpg process :resize_to_limit => [640, 640] version :thumb do process :convert => :jpg process :resize_to_fill => [280, 205] end version :avatar do process :convert => :jpg process :resize_to_fill => [120, 120] end end And my config/initializers/fog.rb : CarrierWave.configure do |config| config.fog_credentials = { :provider => 'AWS', :aws_access_key_id => 'XXX', :aws_secret_access_key => 'XXX', } config.fog_directory = 'bucketname' config.fog_public = false end Anyone know how to make sure this information isn't available?
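
    The query string comes from config.fog_public = false: CarrierWave then returns signed ("authenticated") URLs, which always carry the access key id and a time-limited signature (the secret key itself is never exposed). If the images may be world-readable, the usual fix is to upload them as public objects instead; a sketch:

        # a sketch: public-read objects give plain, unsigned S3 URLs
        CarrierWave.configure do |config|
          config.fog_credentials = {
            :provider              => 'AWS',
            :aws_access_key_id     => 'XXX',
            :aws_secret_access_key => 'XXX'
          }
          config.fog_directory = 'bucketname'
          config.fog_public    = true   # plain URLs, no AWSAccessKeyId/Signature
        end

    Keeping fog_public = false is also a legitimate choice when the files must stay private; the signed URL is then the mechanism that grants temporary access.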

    Read the article

  • Why qry.post executed with asynchronous mode?

    - by Ryan
    Recently I ran into a strange problem; see the code snippet below: var sqlCommand: string; connection: TADOConnection; qry: TADOQuery; begin connection := TADOConnection.Create(nil); try connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False'; connection.Open(); qry := TADOQuery.Create(nil); try qry.Connection := connection; qry.SQL.Text := 'Select * from aaa'; qry.Open; qry.Append; qry.FieldByName('TestField1').AsString := 'test'; qry.Post; beep; finally qry.Free; end; finally connection.Free; end; end; First, create a new Access database named test.mdb, put it in the directory of this test project, and create in it a table named aaa with a single text field named TestField1. Set a breakpoint at the "beep" line, then launch the test application under the IDE in debug mode. When the IDE stops at the breakpoint (qry.Post has already executed), open test.mdb in Microsoft Access and open table aaa: you will find no changes in the table. If you press F9 and let the IDE continue running, a new record is inserted into table aaa; but if you press Ctrl+F2 to terminate the application at the breakpoint, no record is inserted. Under normal circumstances a new record should be inserted into table aaa as soon as qry.Post has executed. Can anyone explain this? It has been troubling me for a long time. Thanks! BTW, the IDE is Delphi 2010, and the Access .mdb file was created with Microsoft Access 2007 under Windows 7.

    Read the article

  • Dual usage of asp.net mvc and php under same domain

    - by jim
    Hello all, I've got a scenario where a customer has a Linux-hosted PHP app (Joomla) that they wish to integrate with some back-end ASP.NET MVC functionality that was created for a 'sister' site. Basically, the MVC site has prices and stock availability methods which (in the sister site) populate dropdown lists and other 'order' style info on the pages. I've been tasked with looking at integration options to allow the PHP site to use this info as a 'service' (as ever, these guys are looking at cost of ownership, maintenance etc., so this is their preferred route). Has anyone done anything similar with success? I'd imagine (much like the sister site) liberal doses of Ajax will be employed to populate portions of the page on demand, so this may have a bearing on any suggestions you have. Also, the methods being called ultimately populate the same database, so there are no issues correlating the IDs across the different platforms. I don't really want to go down any 'iframe' type route if at all possible, though reality may dictate it as an option. I'm possibly (naively) imagining that I could simply invoke the MVC functions directly from the PHP app with some sort of 'session' variable being passed for authentication. Is that a tall order, or pretty straightforward? cheers jim
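
    A sketch of what "consume it as a service" tends to look like from the Joomla side, assuming the MVC app exposes a JSON endpoint (an ordinary action returning Json(...)); the URL, parameter, response shape and credential scheme below are all placeholders:

        <?php
        // a sketch: fetch price/stock data from the MVC site over HTTP
        $ch = curl_init('https://sister-site.example.com/stock/availability?sku=ABC123');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_HTTPHEADER, array('Accept: application/json'));
        curl_setopt($ch, CURLOPT_USERPWD, 'serviceuser:servicepass'); // or a shared token

        $body = curl_exec($ch);
        curl_close($ch);

        $data = json_decode($body, true);   // e.g. array('inStock' => true, 'price' => 9.99)

    The same endpoint can also back the Ajax calls made directly from the rendered pages, which avoids the iframe route entirely.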

    Read the article

  • Play framework does not return page and static content

    - by Anton
    I'm using Play framework in production for one of my web projects. From time to time Play does not render the main page or does not return some of the static content files. I attached a few screenshots below: the first shows the Firebug console, where loading of the site gets stuck at the beginning while serving the home page; the second shows the Fiddler console, where two static resources are not loading. The issue is hard to reproduce; it happens about 1 time in 15, and I have to delete the cached data and reload the page (pressing Ctrl-F5 in Firefox). It can be reproduced in most browsers. Initially I thought there was something wrong with the hosting provider, but I have since changed hosting providers and the issue has not gone away. The Play version is 1.2.2, running as a standalone server. I'm not sure, but perhaps deploying Play to Jetty/Tomcat/Resin would help. I'm also thinking about rewriting the application on another stack that is well known to me (J2EE, Spring, whatever). I have no idea how to debug and resolve this issue. Any clue? Has anyone faced the same issue with Play before?

    Read the article

  • ASP.NET MVC Authorize by Group

    - by Jimmo
    I have what seems like a common issue with SaaS applications, but have not seen this question on here anywhere. I am using ASP.NET MVC with Forms Authentication. I have implemented a custom membership provider to handle logic, but have one issue (perhaps the issue is in my mental picture of the system). As with many SaaS apps, Customers create accounts and use the app in a way that looks like they are the only ones present (they only see their items, users, etc.) In reality, there are generic controllers and views presenting data depending on their account. When calling something like ValidateUser, I have access to their affiliation in the User object - what I don't have is the context of the request to which to compare it. As an example, One company called ABC goes to abc.mysite.com Another company called XYZ goes to xyz.mysite.com When an ABC user calls http://abc.mysite.com/product/edit/12 I have an [Authorize] attribute on the Edit method in the ProductController to make sure he is signed in and has sufficient permission to do so. If that same ABC user tried to access http://xyz.mysite.com/product/edit/12 I would not want to validate him in the context of that call. In the ValidateUser of the MembershipProvider, I have the information about the user, but not about the request. I can tell that the user is from ABC, but I cannot tell that the request is for XYZ at that point in the code. How should I resolve this?

    Read the article

  • Is it possible to receive SMS message on appWidget?

    - by cappuccino
    Is it possible to receive SMS messages in an appWidget? I looked at the Android sample source (API Demos). In API Demos, the ExampleAppWidgetProvider class extends AppWidgetProvider, not Activity, so I guess it is impossible to register an SMS receiver like this: rcvIncoming = new BroadcastReceiver() { @Override public void onReceive(Context context, Intent intent) { Log.i("telephony", "SMS received"); Bundle data = intent.getExtras(); if (data != null) { // SMS uses a data format known as a PDU Object pdus[] = (Object[]) data.get("pdus"); String message = "New message:\n"; String sender = null; for (Object pdu : pdus) { SmsMessage part = SmsMessage.createFromPdu((byte[])pdu); message += part.getDisplayMessageBody(); if (sender == null) { sender = part.getDisplayOriginatingAddress(); } } Log.i(sender, message); } } }; registerReceiver(rcvIncoming, new IntentFilter("android.provider.Telephony.SMS_RECEIVED")); My goal is to receive SMS messages in my custom appWidget. Any help would be appreciated!
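
    An AppWidgetProvider is itself a BroadcastReceiver, so there is no Activity context from which to call registerReceiver(); the usual workaround is to declare a second receiver statically in AndroidManifest.xml and have it push updates into the widget. A sketch, with the receiver class name as a placeholder:

        <!-- a sketch: static registration replaces registerReceiver() -->
        <uses-permission android:name="android.permission.RECEIVE_SMS" />

        <receiver android:name=".SmsWidgetReceiver">
            <intent-filter>
                <action android:name="android.provider.Telephony.SMS_RECEIVED" />
            </intent-filter>
        </receiver>

    Inside that receiver's onReceive() the PDUs can be parsed exactly as in the snippet above, and the text pushed into the widget with AppWidgetManager.getInstance(context).updateAppWidget(...) and a RemoteViews.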

    Read the article

  • Java Client .class File Protection

    - by Zac
    I am in the requirements phase of building a JEE application that will most likely run on a GlassFish/JBoss backend (doesn't matter for now). I know I shouldn't be thinking about architecture at requirements time, but one can't help but start to imagine how the components would all snap together :-) Here are some hard, non-flexible requirements on the client-side: (1) The client application will be a Swing box (2) The client is free to download, but will use a subscription model (thus requiring a login mechanism with server-side authentication/authorization, etc.) (3) Yes, Java is the best platform solution for the problem at hand for reasons outside the scope of this post (4) The client-side .class files need safeguarding against decompiling That last (4th) requirement is the basis of this post. I'm not really worried about someone actually decompiling and getting at my source code: in the end, it's just Swing controls driven by some lightweight business logic. I'm worried about a scenario where someone decompiles my code, modifies it to exploit/attack the server, re-compiles, and fires it up. I've envisioned all sorts of nasty solutions, but didn't know if this was a common problem with a common solution for JEE developers. Any thoughts? Not interested in "code obfuscation" techniques! Thanks for any input!

    Read the article

  • Which tools to use and how to find file descriptors leaking from Glassfish?

    - by cclark
    We release new code to production every week and Glassfish hasn't had any problems. This weekend we had to move racks at our hosting provider. There were not any code changes (they just powered off, moved, re-racked and powered on) but we're on a new network infrastructure and suddenly we're leaking file descriptors like a sieve. So I'm guessing there is some sort of connection attempting to be made which now fails due to a network change. I'm running Glassfish v2ur2-b04/AS9.1_02 on RHEL4 with an embedded IMQ instance. After the move I started seeing: [#|2010-04-25T05:34:02.783+0000|SEVERE|sun-appserver9.1|javax.enterprise.system.container.web|_ThreadID=33;_ThreadName=SelectorThread-?4848;_RequestID=c4de6f6d-c1d6-416d-ac6e-49750b1a36ff;|WEB0756: Caught exception during HTTP processing. java.io.IOException: Too many open files at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) ... [#|2010-04-25T05:34:03.327+0000|WARNING|sun-appserver9.1|javax.enterprise.system.stream.err|_ThreadID=34;_ThreadName=Timer-1;_RequestID=d27e1b94-d359-4d90-a6e3-c7ec49a0f383;|java.lang.NullPointerException at com.sun.jbi.management.system.AutoAdminTask.pollAutoDirectory(AutoAdminTask.java:1031) Using lsof I check the number of file descriptors and I see quite a few entries which look like: java 18510 root 8556u sock 0,4 1555182 can't identify protocol java 18510 root 8557u sock 0,4 1555320 can't identify protocol java 18510 root 8558u sock 0,4 1555736 can't identify protocol java 18510 root 8559u sock 0,4 1555883 can't identify protocol If I do a count of open file descriptors every minute I see it growing by 12 every minute. I have no idea what these sockets are. I've undeployed my application so there is only a plain Glassfish instance running and I still see it leaking 12 file descriptors a minute. So I think this leak is in Glassfish or potentially IMQ. What approach should I take to tracking down these sockets of unknown protocol? What tools can I use (or flags can I pass to lsof) to get more information about where to look? thanks, chuck
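
    Some hedged starting points from the shell; the PID below is the java process from the lsof output, and a growth of 12 descriptors per minute suggests something retrying roughly every 5 seconds. Entries showing "can't identify protocol" often correspond to half-closed or orphaned sockets, so the interesting part is watching what creates them:

        # a sketch of narrowing down the leak
        PID=18510

        # snapshot the descriptors a few minutes apart and diff them
        lsof -p $PID > fds.1; sleep 300; lsof -p $PID > fds.2; diff fds.1 fds.2

        # count entries per TYPE column to confirm the growth really is all sockets
        lsof -p $PID | awk '{print $5}' | sort | uniq -c | sort -rn

        # watch socket creation and connection attempts live; a repeatedly failing
        # endpoint (DNS, SMTP, JMS, old rack IP, etc.) usually shows up here
        strace -f -e trace=socket,connect,close -p $PID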

    Read the article

  • How to get resultset with stored procedure calls over two linked servers?

    - by räph
    I have problems filling a temporary table with the resultset from a procedure call on a linked server, which in turn calls a procedure on yet another server. I have a stored procedure sproc1 with the following code, which calls another procedure sproc2 on a linked server: SET @sqlCommand = 'INSERT INTO #tblTemp ( ModuleID, ParamID) ' + '( SELECT * FROM OPENQUERY(' + @targetServer + ', ' + '''SET FMTONLY OFF; EXEC ' + @targetDB + '.usr.sproc2 ' + @param + ''' ) )' exec ( @sqlCommand ) Now, in the called sproc2, I again call a third procedure sproc3 on another linked server, which returns my resultset: SET @sqlCommand = 'EXEC ' + @targetServer +'.database.usr.sproc3 ' + @param exec ( @sqlCommand ) The whole thing doesn't work, as I get SQL error 7391: The operation could not be performed because OLE DB provider "%ls" for linked server "%ls" was unable to begin a distributed transaction. I already checked the hints in this Microsoft article, but without success. But maybe I can change the code in sproc1: is there some alternative to the temp table and the OPENQUERY? Just calling stored procedures from server A to B to C and returning the resultset works (I do this often in the application), but this special case with the temp table and OPENQUERY does not. Or is what I am trying to do simply not possible? The Microsoft article states: Check the object you refer on the destination server. If it is a view or a stored procedure, or causes an execution of a trigger, check whether it implicitly references another server. If so, the third server is the source of the problem. Run the query directly on the third server. If you cannot run the query directly on the third server, the problem is not actually with the linked server query. Resolve the underlying problem first. Is this my case? PS: I can't avoid the architecture with the three servers.

    Read the article

  • Ubuntu server or Debian server (to run C++ apps developed on Ubuntu)

    - by skyeagle
    I have written a number of C++ server-side daemons for my website, using my Ubuntu 9.10 dev machine. I am now about to venture out to look for a hosting provider etc. This is my problem: I have read in many posts (admittedly old posts) that Debian server is much more robust than Ubuntu server - is this still the case? In particular, I am constantly "raising elephants" with my Ubuntu 9.10 - this is "ok" for home use, but for a website server I would not be so forgiving. Also, there seems to be a new "patch" every few weeks, which I would not like on a server (I want to leave the server well alone and let it get on with its business of serving pages). So in this instance Debian looks the more attractive proposition. On the other hand, I am worried that the C++ apps I have developed on Ubuntu may not be binary compatible with Debian (or I may need to install additional libraries/packages etc. to get things to work), and I have zero experience with Debian. Additionally, I don't want to be grappling with the learning curve of a new OS whilst trying to launch a new web site (I am assuming the Debian UI is quite different from Ubuntu). In this case the maxim "the devil you know is better than the one you don't" seems appropriate, and I find Ubuntu the more attractive proposition (at least I know my apps will run without any problems). Can anyone provide some rational advice (based on actual experience) to help me decide which route to take, given the two (conflicting) trains of thought outlined above?

    Read the article

  • ASP.NET MVC Authorize by Subdomain

    - by Jimmo
    I have what seems like a common issue with SaaS applications, but have not seen this question on here anywhere. I am using ASP.NET MVC with Forms Authentication. I have implemented a custom membership provider to handle logic, but have one issue (perhaps the issue is in my mental picture of the system). As with many SaaS apps, customers create accounts and use the app in a way that looks like they are the only ones present (they only see their items, users, etc.). In reality, there are generic controllers and views presenting data depending on the customer represented in the URL. When calling something like the MembershipProvider.ValidateUser, I have access to the user's customer affiliation in the User object - what I don't have is the context of the request to compare whether it is a data request for the same customer as the user. As an example, One company called ABC goes to abc.mysite.com Another company called XYZ goes to xyz.mysite.com When an ABC user calls http://abc.mysite.com/product/edit/12 I have an [Authorize] attribute on the Edit method in the ProductController to make sure he is signed in and has sufficient permission to do so. If that same ABC user tried to access http://xyz.mysite.com/product/edit/12 I would not want to validate him in the context of that call. In the ValidateUser of the MembershipProvider, I have the information about the user, but not about the request. I can tell that the user is from ABC, but I cannot tell that the request is for XYZ at that point in the code. How should I resolve this?
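
    One way to resolve it is to keep ValidateUser purely about the password and move the tenant check into the authorization step, where the request is available. A sketch of a custom attribute; GetCustomerForUser is a placeholder for however the membership/profile store exposes the user's affiliation:

        using System;
        using System.Web;
        using System.Web.Mvc;

        // a sketch: run the normal forms-auth check, then compare the request's
        // subdomain with the customer the user belongs to
        public class AuthorizeCustomerAttribute : AuthorizeAttribute
        {
            protected override bool AuthorizeCore(HttpContextBase httpContext)
            {
                if (!base.AuthorizeCore(httpContext))
                    return false;

                string host = httpContext.Request.Url.Host;        // e.g. "abc.mysite.com"
                string requestedCustomer = host.Split('.')[0];      // "abc"
                string userCustomer = GetCustomerForUser(httpContext.User.Identity.Name);

                return string.Equals(requestedCustomer, userCustomer,
                                     StringComparison.OrdinalIgnoreCase);
            }

            private string GetCustomerForUser(string userName)
            {
                // placeholder: look the affiliation up in the membership/profile data
                throw new NotImplementedException();
            }
        }

    Actions such as Edit would then carry [AuthorizeCustomer] instead of the plain [Authorize].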

    Read the article

  • WCF service consuming passively issued SAML token

    - by Neillyboy
    What is the best way to pass an existing SAML token from a website already authenticated via a passive STS? We have built an Identity Provider which is issuing passive claims to the website for authentication. We have this working. Now we would like to add some WCF services into the mix - calling them from the context of the already authenticated web application. Ideally we would just like to pass the SAML token on without doing anything to it (i.e. adding new claims / re-signing). All of the examples I have seen require the ActAs sts implementation - but is this really necessary? This seems a bit bloated for what we want to achieve. I would have thought a simple implementation passing the bootstrap token into the channel - using the CreateChannelActingAs or CreateChannelWithIssuedToken mechanism (and setting ChannelFactory.Credentials.SupportInteractive = false) to call the WCF service with the correct binding (what would that be?) would have been enough. We are using the Fabrikam example code as reference, but as I say, think the ActAs functionality here is overkill for what we are trying to achieve.

    Read the article

  • SQL Server 2008 - db mail issue

    - by Chris
    Hello. I have two instances of SQL Server 2008. One was upgraded from SQL Server 2000 and one was a clean, new install. SQL Mail operates perfectly on both instances. DB Mail operates perfectly on the newly installed instance. On the upgraded instance, DB Mail does not send any mail. Of course, I am not positive that the fact this instance is upgraded has anything to do with the issue, but it might. The configuration of my db mail profile and account looks identical to my functioning instance. In the configuration of the 'alerts' tab in the SQL Agent properties i have tried selecting both DB Mail and SQL Mail to no avail. Both instances use the same SMTP server with the same authentication (domain with db engine account). All messages sent via sp_send_db mail and those sent via the 'test email' option are visible in the sysmail_allitems queue and remain there as 'unsent'. The send_status eventually changes to 'failed'. The only messages in the sysmail_event_log are 'mail queue stopped by login domain\myuser', 'mail queue started by login domain/myuser' and 'activiation successful.'. selecting from the externalmailqueue has the same number of rows as sysmail_allitems. i have tried bouncing the agent, the entire instance and moving the other functioning instance to the other node in the cluster. any thoughts? thx.
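
    A few hedged diagnostic steps to run on the upgraded instance; these are the standard msdb Database Mail objects, the only assumption being that the real failure detail is sitting in sysmail_event_log rather than in the 'failed' status:

        -- is the Database Mail queue actually started?
        EXEC msdb.dbo.sysmail_help_status_sp;
        EXEC msdb.dbo.sysmail_start_sp;        -- start it if it is not

        -- the underlying SMTP/activation error text usually lands here
        SELECT TOP 50 * FROM msdb.dbo.sysmail_event_log ORDER BY log_date DESC;

        -- Database Mail depends on Service Broker in msdb and on the mail XPs;
        -- both are worth confirming on an upgraded instance
        SELECT is_broker_enabled FROM sys.databases WHERE name = 'msdb';
        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'Database Mail XPs', 1;     RECONFIGURE;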

    Read the article

  • How to test for existence of a script-scoped variable in PowerShell?

    - by Damian Powell
    Is it possible to test for the existence of a script-scoped variable in PowerShell? I've been using the PowerShell Community Extensions (PSCX) but I've noticed that if you import the module while Set-PSDebug -Strict is set, an error is produced: The variable '$SCRIPT:helpCache' cannot be retrieved because it has not been set. At C:\Users\...\Modules\Pscx\Modules\GetHelp\Pscx.GetHelp.psm1:5 char:24 While investigating how I might fix this, I found this piece of code in Pscx.GetHelp.psm1: #requires -version 2.0 param([string[]]$PreCacheList) if ((!$SCRIPT:helpCache) -or $RefreshCache) { $SCRIPT:helpCache = @{} } This is pretty straight forward code; if the cache doesn't exist or needs to be refreshed, create a new, empty cache. The problem is that calling $SCRIPT:helpCache while Set-PSDebug -Strict is in force casues the error because the variable hasn't been defined yet. Ideally, we could use a Test-Variable cmdlet but such a thing doesn't exist! I thought about looking in the variable: provider but I don't know how to determine the scope of a variable. So my question is: how can I test for the existence of a variable while Set-PSDebug -Strict is in force, without causing an error?
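
    There is no Test-Variable cmdlet, but both Get-Variable and the Variable: drive can answer the question without dereferencing the value, which is what Set-PSDebug -Strict objects to. A sketch:

        # a sketch: ask the engine whether the variable exists instead of reading it
        $cacheExists = [bool](Get-Variable -Name helpCache -Scope Script -ErrorAction SilentlyContinue)
        if ((-not $cacheExists) -or $RefreshCache) {
            $SCRIPT:helpCache = @{}
        }

        # the Variable: drive can be used in a similar way; the scope-qualified
        # path is an assumption worth verifying in your PowerShell version
        if (-not (Test-Path Variable:Script:helpCache)) { $SCRIPT:helpCache = @{} }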

    Read the article

  • How to handle authenticated user access to resources in document oriented system?

    - by Jeremy Raymond
    I'm developing a document oriented application and need to manage user access to the documents. I have a module that handles user authentication, and another module that handles document CRUD operations on the data store. Once a user is authenticated I need to enforce what operations the user can and cannot perform to documents based upon the user's permissions. The best option I could think of to integrate these two pieces together would be to create another module that duplicates the data API but that also takes the authenticated user as a parameter. The module would delegate the authorization check to the auth module and delegate the document operation to the data access module. Something like: -module(auth_data_access). % User is authenticated (logged into the system) % save_doc validates if user is allowed to save the given document and if so % saves it returning ok, else returns {error, permission_denied} save_doc(Doc, User) -> case auth:save_allowed(Doc, User) of ok -> data_access:save_doc(Doc); denied -> {error, permission_denied} end end. Is there a better way I can handle this?

    Read the article

  • "Session is Closed!" - NHibernate

    - by Alexis Abril
    This is in a web application environment: An initial request is able to successfully complete, however any additional requests return a "Session is Closed" response from the NHibernate framework. I'm using a HttpModule approach with the following code: public class MyHttpModule : IHttpModule { public void Init(HttpApplication context) { context.EndRequest += ApplicationEndRequest; context.BeginRequest += ApplicationBeginRequest; } public void ApplicationBeginRequest(object sender, EventArgs e) { CurrentSessionContext.Bind(SessionFactory.Instance.OpenSession()); } public void ApplicationEndRequest(object sender, EventArgs e) { ISession currentSession = CurrentSessionContext.Unbind( SessionFactory.Instance); currentSession.Dispose(); } public void Dispose() { } } SessionFactory.Instance is my singleton implementation, using FluentNHibernate to return an ISessionFactory object. In my repository class, I attempt to use the following syntax: public class MyObjectRepository : IMyObjectRepository { public MyObject GetByID(int id) { using (ISession session = SessionFactory.Instance.GetCurrentSession()) return session.Get<MyObject>(id); } } This allows code in the application to be called as such: IMyObjectRepository repo = new MyObjectRepository(); MyObject obj = repo.GetByID(1); I have a suspicion my repository code is to blame, but I'm not 100% sure on the actual implementation I should be using. I found a similar issue on SO here. I too am using WebSessionContext in my implementation, however, no solution was provided other than writing a custom SessionManager. For simple CRUD operations, is a custom session provider required apart from the built in tools(ie WebSessionContext)?
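
    One likely culprit is the using block in the repository: GetCurrentSession() returns the very session the HttpModule bound for this request, so disposing it there leaves a closed session bound for everything that runs afterwards. Letting the module own the session lifetime is usually enough, and for simple CRUD the built-in WebSessionContext needs no custom session manager. A sketch:

        // a sketch: the HttpModule binds in BeginRequest and unbinds/disposes in
        // EndRequest, so the repository just uses the contextual session and
        // must not dispose it itself
        public class MyObjectRepository : IMyObjectRepository
        {
            public MyObject GetByID(int id)
            {
                ISession session = SessionFactory.Instance.GetCurrentSession();
                return session.Get<MyObject>(id);
            }
        }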

    Read the article

  • Passing Derived Class Instances as void* to Generic Callbacks in C++

    - by Matthew Iselin
    This is a bit of an involved problem, so I'll do the best I can to explain what's going on. If I miss something, please tell me so I can clarify. We have a callback system where on one side a module or application provides a "Service" and clients can perform actions with this Service (A very rudimentary IPC, basically). For future reference let's say we have some definitions like so: typedef int (*callback)(void*); // This is NOT in our code, but makes explaining easier. installCallback(string serviceName, callback cb); // Really handled by a proper management system sendMessage(string serviceName, void* arg); // arg = value to pass to callback This works fine for basic types such as structs or builtins. We have an MI structure a bit like this: Device <- Disk <- MyDiskProvider class Disk : public virtual Device class MyDiskProvider : public Disk The provider may be anything from a hardware driver to a bit of glue that handles disk images. The point is that classes inherit Disk. We have a "service" which is to be notified of all new Disks in the system, and this is where things unravel: void diskHandler(void *p) { Disk *pDisk = reinterpret_cast<Disk*>(p); // Uh oh! // Remainder is not important } SomeDiskProvider::initialise() { // Probe hardware, whatever... // Tell the disk system we're here! sendMessage("disk-handler", reinterpret_cast<void*>(this)); // Uh oh! } The problem is, SomeDiskProvider inherits Disk, but the callback handler can't receive that type (as the callback function pointer must be generic). Could RTTI and templates help here? Any suggestions would be greatly appreciated.
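
    A sketch of the usual rule of thumb: pick one static type (Disk) to travel through the void*, convert to it before erasing the type, and convert back to exactly that type on the other side. With multiple or virtual inheritance the Disk subobject generally does not sit at the same address as the complete MyDiskProvider object, and that pointer adjustment is precisely what the two reinterpret_casts skip:

        // a sketch; both sides agree that the void* always carries a Disk*
        void SomeDiskProvider::initialise()
        {
            // probe hardware, whatever...
            sendMessage("disk-handler",
                        static_cast<void*>(static_cast<Disk*>(this)));
        }

        void diskHandler(void *p)
        {
            Disk *pDisk = static_cast<Disk*>(p);   // matches the cast made on the way in
            // remainder unchanged
        }

    As long as both sides agree on Disk* as the carried type, no RTTI or templates are needed.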

    Read the article

  • How to make a request from an android app that can enter a Spring Security secured webservice method

    - by johnrock
    I have a Spring Security (form based authentication) web app running CXF JAX-RS webservices and I am trying to connect to this webservice from an Android app that can be authenticated on a per user basis. Currently, when I add an @Secured annotation to my webservice method all requests to this method are denied. I have tried to pass in credentials of a valid user/password (that currently exists in the Spring Security based web app and can log in to the web app successfully) from the android call but the request still fails to enter this method when the @Secured annotation is present. The SecurityContext parameter returns null when calling getUserPrincipal(). How can I make a request from an android app that can enter a Spring Security secured webservice method? Here is the code I am working with at the moment: Android call: httpclient.getCredentialsProvider().setCredentials( //new AuthScope("192.168.1.101", 80), new AuthScope(null, -1), new UsernamePasswordCredentials("joeuser", "mypassword")); String userAgent = "Android/" + getVersion(); HttpGet httpget = new HttpGet(MY_URI); httpget.setHeader("User-Agent", userAgent); httpget.setHeader("Content-Type", "application/xml"); HttpResponse response; try { response = httpclient.execute(httpget); HttpEntity entity = response.getEntity(); ... parse xml Webservice Method: @GET @Path("/payload") @Produces("application/XML") @Secured({"ROLE_USER","ROLE_ADMIN","ROLE_GUEST"}) public Response makePayload(@Context Request request, @Context SecurityContext securityContext){ Payload payload = new Payload(); payload.setUsersOnline(new Long(200)); if (payload == null) { return Response.noContent().build(); } else{ return Response.ok().entity(payload).build(); } }
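
    By default DefaultHttpClient only sends the credentials in reply to an HTTP 401 challenge, and a Spring Security configuration that defines only form login never issues one, so the request arrives anonymous and @Secured denies it. One hedged option is to additionally enable HTTP Basic on the service URLs; a sketch in Spring Security 3 namespace config, with the /services/** pattern as a placeholder:

        <!-- a sketch: keep form login for the browser app, accept Basic for the services -->
        <http use-expressions="true">
            <intercept-url pattern="/services/**" access="hasRole('ROLE_USER')" />
            <form-login login-page="/login" />
            <http-basic />
        </http>

    With that in place the first service request gets a 401 challenge, the client replays it with the joeuser credentials, and SecurityContext.getUserPrincipal() is populated inside makePayload.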

    Read the article

  • "NT AUTHORITY\ANONYMOUS LOGON" error in Windows 7 (ASP.NET & Web Service)

    - by Tony_Henrich
    I have an asp.net web app which works fine in Windows XP machine in a domain. I am porting it to a Windows 7 stand alone machine. The app uses a web service which makes a call to sql server. The web server (IIS 7.5) and SQL Server are on the same stand alone machine. I enabled Windows authentication for the website and web service. The web service uses a trusted connection connection string. The web service credentials uses System.Net.CredentialCache.DefaultCredentials. I noticed username, password and domainname are blank after the call! The webservice and web site use an application pool with identity "Network Service". I am getting an exception "NT AUTHORITY\ANONYMOUS LOGON" in the database call in the web service. I am assuming it's related to the blank credentials. I am expecting ASPNET user to be the security token to the database. Why is this not happening? (Usually this happens when sql server and web server are on two different machines in a domain, delegation & double hopping, but in my case everything is on a dev box)
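
    On IIS 7.5 the worker process runs as the application pool identity (NETWORK SERVICE here), not as the old ASPNET account, and DefaultCredentials never exposes the user name or password, so seeing them blank is expected. The anonymous token reaching SQL Server usually means either the web service is being hit with Anonymous Authentication enabled, or nothing is impersonating the caller and the pool identity has no SQL login. Two common routes: create a Windows login in SQL Server for NT AUTHORITY\NETWORK SERVICE and grant it rights on the database, or turn on impersonation so the authenticated Windows user flows through. A sketch of the web.config for the latter:

        <!-- a sketch: impersonate the authenticated Windows user so that
             identity, not an anonymous token, reaches SQL Server -->
        <system.web>
          <authentication mode="Windows" />
          <identity impersonate="true" />
        </system.web>

    (The integrated pipeline warns about identity impersonate="true"; setting <validation validateIntegratedModeConfiguration="false" /> under <system.webServer> or running the pool in classic mode are the usual workarounds. Also make sure Windows Authentication is enabled and Anonymous disabled in IIS for both the site and the web service.)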

    Read the article
