Search Results


  • How to import data to SAP

    - by Mehmet AVSAR
    Hi, as a complete stranger in the town of SAP, I want to transfer my own application's (mobile salesforce automation) data to SAP. My application has records of customers, stocks, inventory, invoices (and waybills), cheques, payments, collections, stock transfer data, etc. I have an additional database which holds the matchings between records, i.e. a customer with ID 345 in my application has key 120-035-0223 in SAP. Every record, of course, has to know its counterpart, including parameters. After searching Google and the SAP help site for a day, I discovered that it's going to be more painful than I expected. The SAP site in particular doesn't give even a clue about it, or at least I couldn't find one. We have transferred our data to some other ERP systems before; some of them wanted XML files, others exposed their APIs. My point is: is SQL Server's SSIS an option for me? I hope it is, so I can fight on my own territory. Since client requests vary a lot, I count flexibility as the most important criterion. Also, I want to transfer as much data as I can. Any help is appreciated. Regards,
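
    As an illustration only (not SAP- or SSIS-specific), the matching database described above boils down to a key cross-reference between local IDs and SAP keys. A minimal sketch of that lookup, with made-up names:

        using System;
        using System.Collections.Generic;

        // Hypothetical cross-reference mirroring the "customer 345 -> 120-035-0223" example above.
        class SapKeyMap
        {
            private readonly Dictionary<int, string> customerKeys = new Dictionary<int, string>();

            // In a real transfer this would be filled from the matching database.
            public void Register(int localId, string sapKey)
            {
                customerKeys[localId] = sapKey;
            }

            public string Resolve(int localId)
            {
                string sapKey;
                if (!customerKeys.TryGetValue(localId, out sapKey))
                    throw new KeyNotFoundException("No SAP key mapped for local ID " + localId);
                return sapKey;
            }
        }

        class Demo
        {
            static void Main()
            {
                var map = new SapKeyMap();
                map.Register(345, "120-035-0223");
                Console.WriteLine(map.Resolve(345));   // prints 120-035-0223
            }
        }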

    Read the article

  • How to effectively use WorkbookBeforeClose event correctly?

    - by Ahmad
    On a daily basis, a person needs to check that specific workbooks have been correctly updated with Bloomberg and Reuters market data, i.e. that all data has pulled through and that the 'numbers look correct'. In the past, people were not checking the 'numbers', which led to inaccurate uploads to other systems etc. The idea is that 'something' needs to be developed to prevent the user from closing/saving the workbook unless he/she has checked that the updates are correct/accurate. The 'numbers look correct' check is purely an intuitive exercise, so it will not be coded in any way. The simple solution was to prompt users prior to closing the specific workbook to verify that the data has been checked. Using VSTO SE for Excel 2007, an add-in was created which hooks into the WorkbookBeforeClose event, initialised in the add-in's ThisAddIn_Startup:

        private void wb_BeforeClose(Xl.Workbook wb, ref bool cancel)
        {
            //.... snip ...
            if (list.Contains(wb.Name))
            {
                DialogResult result = MessageBox.Show("some message", "sometitle", MessageBoxButtons.YesNo);
                if (result != DialogResult.Yes)
                {
                    cancel = true; // I think this prevents the whole application from closing
                }
            }
        }

    I have found the thread "ThisApplication.WorkbookBeforeSave vs ThisWorkbook.Application.WorkbookBeforeSave", which recommends using the ThisApplication.WorkbookBeforeClose event. I think that is what I am doing, since it spans all open files. The issue I have with this approach is that, assuming I have several files open, some of which are on my list, the event prevents Excel from closing all the files sequentially; each file now has to be closed individually. Am I using the event correctly, and is this effective and efficient use of the event? Should I use the application-level event or the document-level event? Is there a way to prevent the above behaviour? Any other suggestions are welcomed. VS 2005 with VSTO SE.
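
    For reference, a minimal sketch of the application-level shape of this hook, wired once at startup and cancelling only for workbooks on the watch list. The list contents and message text are placeholders, and whether cancelling per workbook avoids the "close each file individually" behaviour is exactly the open question above:

        using System.Collections.Generic;
        using System.Windows.Forms;
        using Excel = Microsoft.Office.Interop.Excel;

        // One handler for every open workbook; only watched workbooks get the prompt.
        public class CloseGuard
        {
            private readonly List<string> watched;
            private readonly Excel.Application excel;

            public CloseGuard(Excel.Application excelApp, List<string> watchedNames)
            {
                excel = excelApp;
                watched = watchedNames;
                // Wire once, e.g. from ThisAddIn_Startup, instead of per workbook.
                excel.WorkbookBeforeClose += OnWorkbookBeforeClose;
            }

            private void OnWorkbookBeforeClose(Excel.Workbook wb, ref bool cancel)
            {
                if (!watched.Contains(wb.Name))
                    return;   // not on the list: let Excel close it normally

                DialogResult result = MessageBox.Show(
                    "Have the Bloomberg/Reuters numbers been checked?",
                    "Confirm before closing",
                    MessageBoxButtons.YesNo);

                // Cancelling stops the close of this workbook (and, as the post describes,
                // interrupts Excel's own shutdown sequence when the whole app is closing).
                cancel = (result != DialogResult.Yes);
            }
        }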

    Read the article

  • Securely store a password in program code?

    - by Nick
    My application makes use of the RijndaelManaged class to encrypt data. As part of this encryption, I use a SecureString object loaded with a password, which gets converted to a byte array and loaded into the RijndaelManaged object's Key at runtime. The question I have is about the storage of this SecureString. A user-entered password can be provided at run-time, and that can be "securely" loaded into a SecureString object, but if no user-entered password is given, then I need to default to something. So ultimately the question comes down to: if I have to have some known string or byte array to load into a SecureString object each time my application runs, how do I do that? The "encrypted" data ultimately gets decrypted by another application, so even if no user-entered password is specified, I still need the data to be encrypted while it goes from one app to another. This means I can't have the default password be random, because the other app wouldn't be able to decrypt it properly. One possible solution I'm considering is to create a DLL which only spits out a single passphrase, then take that passphrase and run it through a couple of different hashing/reorganizing functions at runtime before I ultimately feed it into the SecureString object. Would this be secure enough?
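
    Not an answer to the "is it secure enough" part, just a sketch of the derivation step mentioned at the end: running a built-in fallback passphrase through a standard key-derivation function (PBKDF2 via Rfc2898DeriveBytes) before it becomes the Rijndael key. The passphrase, salt and iteration count here are placeholders that both applications would have to share:

        using System.Security.Cryptography;
        using System.Text;

        static class DefaultKey
        {
            // Placeholder values: in the scenario above the passphrase would come from the
            // separate DLL, and the salt/iterations would be agreed with the decrypting app.
            private const string FallbackPassphrase = "example-passphrase";
            private static readonly byte[] Salt = Encoding.UTF8.GetBytes("example-salt-16b");
            private const int Iterations = 10000;

            // Derive a 256-bit key suitable for RijndaelManaged from the fallback passphrase.
            public static byte[] Derive()
            {
                var kdf = new Rfc2898DeriveBytes(FallbackPassphrase, Salt, Iterations);
                return kdf.GetBytes(32);   // 32 bytes = 256-bit key
            }
        }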

    Read the article

  • Passing a LINQ DataRow Reference in a GridView's ItemTemplate

    - by Bob Kaufman
    Given the following GridView:

        <asp:GridView runat="server" ID="GridView1" AutoGenerateColumns="false" DataKeyNames="UniqueID"
                      OnSelectedIndexChanging="GridView1_SelectedIndexChanging" >
            <Columns>
                <asp:BoundField HeaderText="Remarks" DataField="Remarks" />
                <asp:TemplateField HeaderText="Listing">
                    <ItemTemplate>
                        <%# ShowListingTitle( ( ( System.Data.DataRowView ) ( Container.DataItem ) ).Row ) %>
                    </ItemTemplate>
                </asp:TemplateField>
                <asp:BoundField HeaderText="Amount" DataField="Amount" DataFormatString="{0:C}" />
            </Columns>
        </asp:GridView>

    which refers to the following code-behind method:

        protected String ShowListingTitle( DataRow row )
        {
            Listing listing = ( Listing ) row;
            return NicelyFormattedString( listing.field1, listing.field2, ... );
        }

    The cast from DataRow to Listing is failing (cannot convert from DataRow to Listing). I'm certain the problem lies in what I'm passing from within the ItemTemplate, which is simply not the right reference to the current record from the LINQ to SQL data set that I've created, which looks like this:

        private void PopulateGrid()
        {
            using ( MyDataContext context = new MyDataContext() )
            {
                IQueryable<Listing> listings = from l in context.Listings
                                               where l.AccountID == myAccountID
                                               select l;
                GridView1.DataSource = listings;
                GridView1.DataBind();
            }
        }
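
    For what it's worth, when the grid is bound to an IQueryable<Listing> rather than a DataTable, Container.DataItem is typically the Listing entity itself, so no DataRowView or DataRow is involved. A hedged variation of the template and method under that assumption:

        <ItemTemplate>
            <%# ShowListingTitle( (Listing) Container.DataItem ) %>
        </ItemTemplate>

        protected String ShowListingTitle( Listing listing )
        {
            // Work with the entity directly instead of going through a DataRow.
            return NicelyFormattedString( listing.field1, listing.field2 );
        }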

    Read the article

  • Calling unmanaged dll from C#. Take 2

    - by Charles Gargent
    I have written a C# program that calls a C++ DLL that echoes the command-line args to a file. When the C++ DLL is called using the rundll32 command it displays the command-line args no problem; however, when it is called from within the C# program it doesn't. I asked this question before to try and solve my problem, but I have since modified my test environment and I think it is worth asking a new question. Here is the C++ DLL:

        #include "stdafx.h"
        #include "stdlib.h"
        #include <stdio.h>
        #include <iostream>
        #include <fstream>
        using namespace std;

        BOOL APIENTRY DllMain( HANDLE hModule, DWORD ul_reason_for_call, LPVOID lpReserved )
        {
            return TRUE;
        }

        extern "C" __declspec(dllexport) int WINAPI CMAKEX(
            HWND hwnd, HINSTANCE hinst, LPCSTR lpszCommandLine, DWORD dwReserved)
        {
            ofstream SaveFile("output.txt");
            SaveFile << lpszCommandLine;
            SaveFile.close();
            return 0;
        }

    Here is the C# app:

        using System;
        using System.Collections.Generic;
        using System.Text;
        using System.Security.Cryptography;
        using System.Runtime.InteropServices;
        using System.Net;

        namespace nac
        {
            class Program
            {
                [DllImport("cmakca.dll", SetLastError = true, CharSet = CharSet.Unicode)]
                static extern bool CMAKEX(IntPtr hwnd, IntPtr hinst, string lpszCmdLine, int nCmdShow);

                static void Main(string[] args)
                {
                    string cmdLine = @"/source_filename proxy-1.txt /backup_filename proxy.bak /DialRasEntry NULL /TunnelRasEntry DSLVPN /profile ""C:\Documents and Settings\Administrator\Application Data\Microsoft\Network\Connections\Cm\dslvpn.cmp""";
                    const int SW_SHOWNORMAL = 1;
                    CMAKEX(IntPtr.Zero, IntPtr.Zero, cmdLine, SW_SHOWNORMAL).ToString();
                }
            }
        }

    The output from the rundll32 command

        rundll32 cmakex.dll,CMAKEX /source_filename proxy-1.txt /backup_filename proxy.bak /DialRasEntry NULL /TunnelRasEntry DSLVPN /profile ""C:\Documents and Settings\Administrator\Application Data\Microsoft\Network\Connections\Cm\dslvpn.cmp"

    is

        /source_filename proxy-1.txt /backup_filename proxy.bak /DialRasEntry NULL /TunnelRasEntry DSLVPN /profile ""C:\Documents and Settings\Administrator\Application Data\Microsoft\Network\Connections\Cm\dslvpn.cmp"

    However, the output when the C# app runs is just:

        /
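
    Purely as an assumption about where the odd output could come from, and not a confirmed fix: the export takes an LPCSTR (a narrow string) and returns int, while the DllImport above declares CharSet.Unicode and a bool return. A sketch of an ANSI-marshalled declaration matching the exported signature (the DLL name is left exactly as in the C# snippet above):

        using System;
        using System.Runtime.InteropServices;

        static class NativeMethods
        {
            // Matches: extern "C" int WINAPI CMAKEX(HWND, HINSTANCE, LPCSTR, DWORD)
            [DllImport("cmakca.dll", SetLastError = true, CharSet = CharSet.Ansi)]
            public static extern int CMAKEX(
                IntPtr hwnd,
                IntPtr hinst,
                [MarshalAs(UnmanagedType.LPStr)] string lpszCmdLine,   // LPCSTR -> ANSI marshalling
                uint dwReserved);
        }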

    Read the article

  • From VB6 to .net via COM and Remoting...What a mess!

    - by Robert
    I have some legacy VB6 applications that need to talk to my .NET engine application. The engine provides an interface that can be connected to via .NET Remoting. I have a stub class library that wraps all of the types that the interface exposes. The purpose of this stub is to translate my .NET types into COM-friendly types. When I exercise this class library from a console application, it is able to connect to the engine, call various methods, and successfully return the wrapped types. The next step in the chain is to allow my VB6 application to call this COM-enabled stub. This works fine for my main engine-entry type (IModelFetcher, wrapped as COM_ModelFetcher). However, when I try to get any of the model fetcher's model types (IClientModel, wrapped as COM_IClientModel, IUserModel, wrapped as COM_IUserModel, etc.), I get the following exception:

        [Exception - type: System.InvalidCastException 'Return argument has an invalid type.'] in mscorlib
           at System.Runtime.Remoting.Proxies.RealProxy.ValidateReturnArg(Object arg, Type paramType)
           at System.Runtime.Remoting.Proxies.RealProxy.PropagateOutParameters(IMessage msg, Object[] outArgs, Object returnValue)
           at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
           at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
           at AWT.Common.AWTEngineInterface.IModelFetcher.get_ClientModel()
           at AWT.Common.AWTEngineCOMInterface.COM_ModelFetcher.GetClientModel()

    The first thing I did when I saw this was to handle the AppDomain.CurrentDomain.AssemblyResolve event, and this allowed me to load the required assemblies. However, I'm still getting the exception. My AssemblyResolve event handler is loading three assemblies correctly, and I can confirm that it does not get called prior to this exception. Can someone help me untie myself from this mess of interprocess communication?!

    Read the article

  • How to delete a large cookie that causes Apache to 400

    - by jakemcgraw
    I've come across an issue where a web application has managed to create a cookie on the client, which, when submitted by the client to Apache, causes Apache to return the following: HTTP/1.1 400 Bad Request Date: Mon, 08 Mar 2010 21:21:21 GMT Server: Apache/2.2.3 (Red Hat) Content-Length: 7274 Connection: close Content-Type: text/html; charset=iso-8859-1 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>400 Bad Request</title> </head><body> <h1>Bad Request</h1> <p>Your browser sent a request that this server could not understand.<br /> Size of a request header field exceeds server limit.<br /> <pre> Cookie: ::: A REALLY LONG COOKIE ::: </pre> </p> <hr> <address>Apache/2.2.3 (Red Hat) Server at www.foobar.com Port 80</address> </body></html> After looking into the issue, it would appear that the web application has managed to create a really long cookie, over 7000 characters. Now, don't ask me how the web application was able to do this, I was under the impression browsers were supposed to prevent this from happening. I've managed to come up with a solution to prevent the cookies from growing out of control again. The issue I'm trying to tackle is how do I reset the large cookie on the client if every time the client tries to submit a request to Apache, Apache returns a 400 client error? I've tried using the ErrorDocument directive, but it appears that Apache bails on the request before reaching any custom error handling.

    Read the article

  • Why would javascript click-areas not be working in IE8?

    - by Edward Tanguay
    I'm trying to find a bug in an old ASP.NET application which causes IE8 to be unable to click on the following "button" area in our application:

        <td width="150px" class="ctl00_CP1_UiCommandManager1i toolBarItem" valign="middle"
            onmouseout="onMouseOverCommand(this,1,'ctl00_CP1_UiCommandManager1',0,0);"
            onmouseover="onMouseOverCommand(this,0,'ctl00_CP1_UiCommandManager1',0,0);"
            onmousedown="onMouseDownCommand(this, 'ctl00_CP1_UiCommandManager1', 0, 0);"
            onmouseup="onMouseUpCommand(this, 'ctl00_CP1_UiCommandManager1', 0, 0);"
            id="ctl00_CP1_UiCommandManager1_0_0">
            <span style="width:100%;overflow:hidden;text-overflow:ellipsis;vertical-align:middle;white-space:nowrap;">
                NEW
            </span>
        </td>

    When we switch IE8 to IE7 compatibility mode, the problem disappears and IE7 is able to click on it. Since the above HTML is generated by a third-party control (Janus, http://www.janusys.com/controls), we don't have the source code. Has anyone experienced any similar problems with IE8? I've determined that it actually fires the onMouseDownCommand command. Also, the CSS of the button area is different in IE8; it doesn't have the color shading that it does in IE7. I can imagine that somewhere the HTML is not valid and IE8, being stricter, is not playing along, but where? Any advice on how to narrow in on this bug is welcome. ANSWER: It turned out that the application was not checking navigator.userAgent for "MSIE 8.0" and was thus treating IE8 as a non-Internet-Explorer browser. Thanks Lazarus for the tip; the IE8 JavaScript debugger is very nice, like Firebug for IE. I will be using it more!

    Read the article

  • HttpHandler and XML files

    - by Frank
    Hello, I would like to intercept any request made to the server for XML files. I thought that it might be possible with an HttpHandler. It's coded and it works... on localhost only (?!?!). So, why is it working on localhost only? Here is my web.config:

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <system.web>
            <httpHandlers>
              <add verb="*" path="*.xml" type="FooBar.XmlHandler, FooBar" />
            </httpHandlers>
          </system.web>
        </configuration>

    Here is my C#:

        namespace FooBar
        {
            public class XmlHandler : IHttpHandler
            {
                public bool IsReusable
                {
                    get { return false; }
                }

                public void ProcessRequest(HttpContext context)
                {
                    HttpResponse Response = context.Response;
                    Response.Write(xmlString);
                }
            }
        }

    As you can see, I'm writing the xmlString directly into the response. That's only temporary, because I'm still wondering how I could return the file itself instead (that's the second question ;) ). What is supposed to be written in the response is only the XML file that will be retrieved by a Flash app. Thanks. Edit: when calling the page from another computer it looks like the request is not reaching the HttpHandler. However, the mapping for IIS has been done correctly.
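
    On the side question of returning the file rather than a hard-coded string, a minimal sketch of ProcessRequest that maps the requested *.xml URL to its physical path and streams it back. It assumes the XML files live under the application root; the localhost-only behaviour itself usually comes down to how IIS maps .xml requests to ASP.NET in the first place, which the handler body cannot change:

        using System.Web;

        namespace FooBar
        {
            public class XmlHandler : IHttpHandler
            {
                public bool IsReusable
                {
                    get { return false; }
                }

                public void ProcessRequest(HttpContext context)
                {
                    // Map the requested *.xml URL to the file on disk and stream it back.
                    string physicalPath = context.Server.MapPath(
                        context.Request.AppRelativeCurrentExecutionFilePath);

                    context.Response.ContentType = "text/xml";
                    context.Response.WriteFile(physicalPath);
                }
            }
        }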

    Read the article

  • Scrolling screen upward to expose TextView above keyboard

    - by Matt Winters
    I think I'm missing something obvious and would appreciate an answer. I have a view with a two-section grouped tableView, each section having one row and a textView, with row heights of 335 and 140. This allows for a box with nicely rounded corners to type text into when the keyboard appears (the 140-height section) and, when the keyboard is dismissed, a nice box to read more text (notes); most of the time, use is without the keyboard. I also added a toolbar at the bottom of the screen to scroll up above the keyboard. A button on the toolbar dismisses the keyboard. This last part works fine, with the keyboard going up and down using a notification and the following code in a keyboardWillShow method:

        [UIView beginAnimations:@"showKeyboardAnimation" context:nil];
        [UIView setAnimationDuration:0.50];
        self.view.frame = CGRectMake(self.view.frame.origin.x,
                                     self.view.frame.origin.y,
                                     self.view.frame.size.width,
                                     self.view.frame.size.height - 216);
        [UIView commitAnimations];

    But with the above code, the two sections of the tableView remain unscrolled; only the toolbar and the keyboard move. With the following code (found in previous posts), both the toolbar and the tableView sections move:

        [UIView beginAnimations:nil context:NULL];
        [UIView setAnimationDuration:0.50];
        CGRect rect = self.view.frame;
        rect.origin.y -= 216;
        self.view.frame = rect;
        [UIView commitAnimations];

    Now I know that I have to tweak the numbers to get everything as I want it, but my first question is: what is substantively different between the two sets of code such that the sections move in the second but not in the first? The toolbar also moves with the second version. The second question is: am I going to be able to scroll the smaller section from off the screen to above the keyboard while at the same time moving the toolbar up just 216? Thanks

    Read the article

  • Keyword 'this'(Me) is not available calling the base constructor

    - by serhio
    In the inherited class I use the base constructor, but I can't use the class's members when calling this base constructor. In this example I have a PicturedLabel that knows its own color and has an image. A TypedLabel : PicturedLabel knows its type but uses the base color. The (base) image that TypedLabel uses should be colored with the (base) color; however, I can't obtain this color: Error: Keyword 'this' is not available in the current context. A workaround?

        /// base class
        public class PicturedLabel : Label
        {
            PictureBox pb = new PictureBox();
            public Color LabelColor;

            public PicturedLabel()
            {
                // initialised here in a specific way
                LabelColor = Color.Red;
            }

            public PicturedLabel(Image img) : base()
            {
                pb.Image = img;
                this.Controls.Add(pb);
            }
        }

        public enum LabelType { A, B }

        /// derived class
        public class TypedLabel : PicturedLabel
        {
            public TypedLabel(LabelType type)
                : base(GetImageFromType(type, this.LabelColor))
                  // Error: Keyword 'this' is not available in the current context
            {
            }

            public static Image GetImageFromType(LabelType type, Color c)
            {
                Image result = new Bitmap(10, 10);
                Rectangle rec = new Rectangle(0, 0, 10, 10);
                Pen pen = new Pen(c);
                Graphics g = Graphics.FromImage(result);
                switch (type)
                {
                    case LabelType.A: g.DrawRectangle(pen, rec); break;
                    case LabelType.B: g.DrawEllipse(pen, rec); break;
                }
                return result;
            }
        }
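
    One possible rearrangement rather than a change to the language rule: arguments to base(...) cannot use 'this', so the derived constructor can let the base constructor run first and then hand the image over in its own body. A sketch under that assumption (LabelType and GetImageFromType as defined above):

        using System.Drawing;
        using System.Windows.Forms;

        public class PicturedLabel : Label
        {
            private readonly PictureBox pb = new PictureBox();
            public Color LabelColor;

            public PicturedLabel()
            {
                LabelColor = Color.Red;      // initialised here in a specific way
                this.Controls.Add(pb);
            }

            // Lets derived classes attach the image after the base has been constructed.
            protected void SetImage(Image img)
            {
                pb.Image = img;
            }
        }

        public class TypedLabel : PicturedLabel
        {
            public TypedLabel(LabelType type)
            {
                // The base constructor has already run, so 'this' (and LabelColor) are usable here.
                this.SetImage(GetImageFromType(type, this.LabelColor));
            }

            // GetImageFromType(type, color) as in the original post.
        }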

    Read the article

  • Issue with IHttpHandler and relative URLs

    - by vtortola
    Hi, I've developed an IHttpHandler class and configured it as verb="*" path="*", so I'm handling every request with it in an attempt to create my own REST implementation for a test web site that generates its HTML dynamically. So, when a request for a .css file arrives, I have to do something like context.Response.WriteFile(Server.MapPath(url)); the same goes for pictures and so on, since I have to serve every response myself. My main issue is when I put relative URLs in the anchors. For example, the main page has a link like <a href="page1">Go to Page 1</a>, and in Page 1 I have another link <a href="page2">Go to Page 2</a>. Page 1 and Page 2 are supposed to be at the same level (http://host/page1 and http://host/page2), but when I click on Go to Page 2, I get this URL in the handler: ~/page1/~/page2. That is a pain, because I have to do url = url.Substring(url.LastIndexOf('~')) to clean it, although I feel there is nothing wrong and this behavior is totally normal. Right now I can cope with it, but I think that in the future it is going to give me headaches. I've tried to set all the links to absolute URLs using the information in context.Request.Url, but that's also a pain :D, so I'd like to know if there is a nicer way to do this kind of thing. Don't hesitate to give me pretty obvious answers, because I'm pretty new to web development and I'm probably missing something basic about URLs, HTTP and so on. Thanks in advance and kind regards.
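
    One hedged way to sidestep the "~/page1/~/page2" stitching is to emit root-relative links instead of relative ones, resolved once against the application path. A small helper sketch, assuming the handler has the current HttpContext in hand:

        using System.Web;

        static class LinkHelper
        {
            // Turn an app-relative page name ("page2") into a root-relative href ("/page2",
            // or "/myapp/page2" when the site runs under a virtual directory).
            public static string Href(HttpContext context, string pageName)
            {
                string appPath = context.Request.ApplicationPath ?? "/";
                if (!appPath.EndsWith("/"))
                    appPath += "/";
                return appPath + pageName.TrimStart('/');
            }
        }

        // Usage when building the HTML inside the handler:
        //   string link = "<a href=\"" + LinkHelper.Href(context, "page2") + "\">Go to Page 2</a>";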

    Read the article

  • How do you create a non-Thread-based Guice custom Scope?

    - by Russ
    It seems that all Guice's out-of-the-box Scope implementations are inherently Thread-based (or ignore Threads entirely): Scopes.SINGLETON and Scopes.NO_SCOPE ignore Threads and are the edge cases: global scope and no scope. ServletScopes.REQUEST and ServletScopes.SESSION ultimately depend on retrieving scoped objects from a ThreadLocal<Context>. The retrieved Context holds a reference to the HttpServletRequest that holds a reference to the scoped objects stored as named attributes (where name is derived from com.google.inject.Key). Class SimpleScope from the custom scope Guice wiki also provides a per-Thread implementation using a ThreadLocal<Map<Key<?>, Object>> member variable. With that preamble, my question is this: how does one go about creating a non-Thread-based Scope? It seems that something that I can use to look up a Map<Key<?>, Object> is missing, as the only things passed in to Scope.scope() are a Key<T> and a Provider<T>. Thanks in advance for your time.

    Read the article

  • Keyboard hook return different symbols from card reader depends whther my app in focus or not

    - by user363868
    I'm writing a WinForms application where one of the inputs is a magnetic stripe card reader (CR). I am using the code from George Mamaladze's article "Processing Global Mouse and Keyboard Hooks in C#" on codeproject.com to listen to the keyboard (the USB card reader acts the same way as a keyboard), and I have a weird situation. One card reader, CR1 (Unitech MS240-2UG), produces keystrokes which I intercept in the KeyPress event; I analyze them to detect a certain pattern, like %ABCD-6EFJHI?, and trigger some logic. The analysis is required because the user can type something else into the application, or into another application, while my app is open. When I use another card reader, CR2 (IdTech IDBM-334133), the keystrokes intercepted by the hook start with the number 5 instead of % (it is actually the same key on the keyboard). Since % is the starting sentinel, it is very important for me to be able to recognize input from the card reader. Moreover, if my app is running in the background and the focus is on Notepad when I swipe a card, the string %ABCD-6EFJHI? appears in Notepad and is intercepted the same way, with the proper starting character, by the keyboard hook. If the card is swiped while the focus is on my Form, it comes through as 5ABCD-6EFJHI?. A user who tried the app with another card reader got the same result as I do with CR2; only CR1 works for me as expected. I looked in the Windows Device Manager and both devices use the same HID driver supplied by MS. I checked the devices through the respective software from the card reader makers, and the starting and ending sentinels are set to % and ? respectively on both. I would appreciate any ideas and thoughts, as I have hit a wall myself. Thank you
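
    Leaving the '%'-versus-'5' difference between the readers aside, here is a sketch of the buffering side described above: accumulating KeyPress characters between a start and an end sentinel so a swipe is recognised regardless of what else is typed. The sentinel characters and the callback are assumptions.

        using System;
        using System.Text;

        // Collects characters delivered by a global KeyPress hook and raises a callback
        // when a full "%...?" card-reader frame has been seen.
        public class SwipeBuffer
        {
            private const char StartSentinel = '%';
            private const char EndSentinel = '?';

            private readonly StringBuilder buffer = new StringBuilder();
            private readonly Action<string> onSwipe;
            private bool collecting;

            public SwipeBuffer(Action<string> onSwipeCallback)
            {
                onSwipe = onSwipeCallback;
            }

            // Call this from the hook's KeyPress handler with e.KeyChar.
            public void Feed(char c)
            {
                if (!collecting)
                {
                    if (c == StartSentinel)
                    {
                        collecting = true;
                        buffer.Length = 0;
                        buffer.Append(c);
                    }
                    return;   // ordinary typing outside a swipe is ignored here
                }

                buffer.Append(c);
                if (c == EndSentinel)
                {
                    collecting = false;
                    onSwipe(buffer.ToString());   // e.g. "%ABCD-6EFJHI?"
                }
            }
        }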

    Read the article

  • JsonParseException on Valid JSON

    - by user2909602
    I am having an issue calling a RESTful service from my client code. I have written the RESTful service using CXF/Jackson, deployed it to localhost, and tested it successfully using RESTClient. Below is a snippet of the service code:

        @POST
        @Produces("application/json")
        @Consumes("application/json")
        @Path("/set/mood")
        public Response setMood(MoodMeter mm) {
            this.getMmDAO().insert(mm);
            return Response.ok().entity(mm).build();
        }

    The model class and DAO work successfully and the service itself works fine using RESTClient. However, when I attempt to call this service from JavaScript, I get the error below on the server side:

        Caused by: org.codehaus.jackson.JsonParseException: Unexpected character ('m' (code 109)):
            expected a valid value (number, String, array, object, 'true', 'false' or 'null')

    I have copied the client-side code below. To make sure it has nothing to do with the JSON data itself, I used a valid JSON string (which works using RESTClient, the JSON.parse() method, and JSONLint) in the vars 'json' (string) and 'jsonData' (JSON). Here is the JavaScript code:

        var json = '{"mood_value":8,"mood_comments":"new comments","user_id":5,"point":{"latitude":37.292929,"longitude":38.0323323},"created_dtm":1381546869260}';
        var jsonData = JSON.parse(json);

        $.ajax({
            url: 'http://localhost:8080/moodmeter/app/service/set/mood',
            dataType: 'json',
            data: jsonData,
            type: "POST",
            contentType: "application/json"
        });

    I've seen the JsonParseException a number of times on other threads, but in this case the JSON itself appears to be valid (and tested). Any thoughts are appreciated.

    Read the article

  • postgresql error - ERROR: input is out of range

    - by CaffeineIV
    The function below keeps returning this error message. I thought that maybe the double precision field type was what was causing this, and I tried to use CAST, but either that's not it, or I didn't do it right... Help? Here's the error:

        ERROR: input is out of range
        CONTEXT: PL/pgSQL function "calculate_distance" line 7 at RETURN
        ********** Error **********
        ERROR: input is out of range
        SQL state: 22003
        Context: PL/pgSQL function "calculate_distance" line 7 at RETURN

    And here's the function:

        CREATE OR REPLACE FUNCTION calculate_distance(character varying, double precision, double precision, double precision, double precision)
          RETURNS double precision AS
        $BODY$
        DECLARE
            earth_radius double precision;
        BEGIN
            earth_radius := 3959.0;
            RETURN earth_radius * acos(sin($2 / 57.2958) * sin($4 / 57.2958)
                   + cos($2 / 57.2958) * cos($4 / 57.2958) * cos(($5 / 57.2958) - ($3 / 57.2958)));
        END;
        $BODY$
          LANGUAGE 'plpgsql' VOLATILE
          COST 100;
        ALTER FUNCTION calculate_distance(character varying, double precision, double precision, double precision, double precision) OWNER TO postgres;

    I tried (unsuccessfully) changing that RETURN line to:

        RETURN CAST( (earth_radius * acos(sin($2 / 57.2958) * sin($4 / 57.2958)
               + cos($2 / 57.2958) * cos($4 / 57.2958) * cos(($5 / 57.2958) - ($3 / 57.2958))) ) AS text);
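
    For reference, the expression in that RETURN is the spherical law of cosines for great-circle distance, with 57.2958 ≈ 180/π converting degrees to radians; PostgreSQL's acos() reports "input is out of range" when its argument falls outside [-1, 1], which floating-point rounding of this expression can produce for identical (or antipodal) points. In LaTeX notation:

        d = R \cdot \arccos\bigl( \sin\varphi_1 \sin\varphi_2
              + \cos\varphi_1 \cos\varphi_2 \cos(\lambda_2 - \lambda_1) \bigr),
        \qquad R = 3959 \text{ miles}, \quad \arccos \text{ is defined only for arguments in } [-1, 1].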

    Read the article

  • Unittesting Url.Action (using Rhino Mocks?)

    - by Kristoffer Ahl
    I'm trying to write a test for an UrlHelper extensionmethod that is used like this: Url.Action<TestController>(x => x.TestAction()); However, I can't seem set it up correctly so that I can create a new UrlHelper and then assert that the returned url was the expected one. This is what I've got but I'm open to anything that does not involve mocking as well. ;O) [Test] public void Should_return_Test_slash_TestAction() { // Arrange RouteTable.Routes.Add("TestRoute", new Route("{controller}/{action}", new MvcRouteHandler())); var mocks = new MockRepository(); var context = mocks.FakeHttpContext(); // the extension from hanselman var helper = new UrlHelper(new RequestContext(context, new RouteData()), RouteTable.Routes); // Act var result = helper.Action<TestController>(x => x.TestAction()); // Assert Assert.That(result, Is.EqualTo("Test/TestAction")); } I tried changing it to urlHelper.Action("Test", "TestAction") but it will fail anyway so I know it is not my extensionmethod that is not working. NUnit returns: NUnit.Framework.AssertionException: Expected string length 15 but was 0. Strings differ at index 0. Expected: "Test/TestAction" But was: <string.Empty> I have verified that the route is registered and working and I am using Hanselmans extension for creating a fake HttpContext. Here's what my UrlHelper extentionmethod look like: public static string Action<TController>(this UrlHelper urlHelper, Expression<Func<TController, object>> actionExpression) where TController : Controller { var controllerName = typeof(TController).GetControllerName(); var actionName = actionExpression.GetActionName(); return urlHelper.Action(actionName, controllerName); } public static string GetControllerName(this Type controllerType) { return controllerType.Name.Replace("Controller", string.Empty); } public static string GetActionName(this LambdaExpression actionExpression) { return ((MethodCallExpression)actionExpression.Body).Method.Name; } Any ideas on what I am missing to get it working??? / Kristoffer

    Read the article

  • using JMock to write unit test for a simple spring JDBC DAO

    - by Quincy
    I'm writing a unit test for a Spring JDBC DAO. The method to test is:

        public long getALong() {
            return simpleJdbcTemplate.queryForObject("sql query here", new RowMapper<Long>() {
                public Long mapRow(ResultSet resultSet, int i) throws SQLException {
                    return resultSet.getLong("a_long");
                }
            });
        }

    Here is what I have in the test:

        public void testGetALong() throws Exception {
            final Long result = 1000L;
            context.checking(new Expectations() {{
                oneOf(simpleJdbcTemplate).queryForObject("sql_query", new RowMapper<Long>() {
                    public Long mapRow(ResultSet resultSet, int i) throws SQLException {
                        return resultSet.getLong("a_long");
                    }
                });
                will(returnValue(result));
            }});

            Long seq = dao.getALong();
            context.assertIsSatisfied();
            assertEquals(seq, result);
        }

    Naturally, the test doesn't work (otherwise, I wouldn't be asking this question here). The problem is that the RowMapper in the test is a different instance from the RowMapper in the DAO, so the expectation is not met. I tried to put with() around the SQL query and with(any(RowMapper.class)) for the row mapper. That doesn't work either; it complains: "not all parameters were given explicit matchers: either all parameters must be specified by matchers or all must be specified by values, you cannot mix matchers and values".

    Read the article

  • Linq to sql DataContext cannot set load options after results been returned

    - by David Liddle
    I have two tables A and B with a one-to-many relationship respectively. On some pages I would like to get a list of A objects only. On other pages I would like to load A with objects in B attached. This can be handled by setting the load options DataLoadOptions options = new DataLoadOptions(); options.LoadWith<A>(a => a.B); dataContext.LoadOptions = options; The trouble occurs when I first of all view all A's with load options, then go to edit a single A (do not use load options), and after edit return to the previous page. I understand why the error is occurring but not sure how to best get round this problem. I would like the DataContext to be loaded up per request. I thought I was achieving this by using StructureMap to load up my DataContext on a per request basis. This is all part of an n-tier application where my Controllers call Services which in turn call Repositories. ForRequestedType<MyDataContext>() .CacheBy(InstanceScope.PerRequest) .TheDefault.Is.Object(new MyDataContext()); ForRequestedType<IAService>() .TheDefault.Is.OfConcreteType<AService>(); ForRequestedType<IARepository>() .TheDefault.Is.OfConcreteType<ARepository>(); Here is a brief outline of my Repository public class ARepository : IARepository { private MyDataContext db; public ARepository(MyDataContext context) { db = context; } public void SetLoadOptions(DataLoadOptions options) { db.LoadOptions = options; } public IQueryable<A> Get() { return from a in db.A select a; } So my ServiceLayer, on View All, sets the load options and then gets all A's. On editing A my ServiceLayer should spin up a new DataContext and just fetch a list of A's. When sql profiling, I can see that when I go to the Edit page it is requesting A with B objects.
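
    For comparison, a minimal sketch of a "fresh context per unit of work" shape, where LoadOptions is set before the first query on a context that is never reused. The type names follow the post, and whether this fits the StructureMap per-request setup above is left open:

        using System.Collections.Generic;
        using System.Data.Linq;
        using System.Linq;

        public class AReadRepository
        {
            // A context used once: LoadOptions is set before any results are materialised.
            public IList<A> GetAllWithChildren()
            {
                using (var context = new MyDataContext())
                {
                    var options = new DataLoadOptions();
                    options.LoadWith<A>(a => a.B);
                    context.LoadOptions = options;

                    return context.A.ToList();
                }
            }

            // A separate, plain context for the edit scenario, created per call.
            public IList<A> GetAll()
            {
                using (var context = new MyDataContext())
                {
                    return context.A.ToList();
                }
            }
        }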

    Read the article

  • LinkedIn API returning extra/incorrect login prompt

    - by Paul Osetinsky
    I have a Rails application running the omniauth-linkedin gem and the linkedin gem (essentially an API wrapper). When a user logs in, they receive a primary login prompt that displays the correct scopes (FULL PROFILE and EMAIL ADDRESS). However, after they log in, they get another login prompt that should not come up, and that ignores the initial scope request. It tells them that LinkedIn is only requesting their PROFILE OVERVIEW, which is incorrect. The problem must lie in my auth_controller, and I think it has to do with the URL that is created in one of the authentication stages (definitely right after the user enters their LinkedIn authentication credentials). Here is my auth_controller:

        require 'linkedin'

        class AuthController < ApplicationController
          def auth
            client = LinkedIn::Client.new(ENV['LINKEDIN_KEY'], ENV['LINKEDIN_SECRET'])
            request_token = client.request_token(:oauth_callback => "http://#{request.host_with_port}/callback")
            session[:rtoken] = request_token.token
            session[:rsecret] = request_token.secret
            redirect_to client.request_token.authorize_url
          end

          def callback
            client = LinkedIn::Client.new(ENV['LINKEDIN_KEY'], ENV['LINKEDIN_SECRET'])
            if session[:atoken].nil?
              pin = params[:oauth_verifier]
              atoken, asecret = client.authorize_from_request(session[:rtoken], session[:rsecret], pin)
              session[:atoken] = atoken
              session[:asecret] = asecret
              @user = current_user
              @user.uid = client.profile(:fields => ["id"]).id
              flash.now[:success] = 'Signed in with LinkedIn.'
            else
              client.authorize_from_access(session[:atoken], session[:asecret])
              @user.uid = client.profile(:fields => ["id"]).id
              flash.now[:success] = 'Signed in with LinkedIn.'
            end
            @user = current_user
            @user.save
            redirect_to current_user
          end
        end

    Just in case, here is my omniauth.rb file that states the scopes I am requesting for my application:

        Rails.application.config.middleware.use OmniAuth::Builder do
          provider :linkedin, ENV['LINKEDIN_KEY'], ENV['LINKEDIN_SECRET'],
            :scope => 'r_fullprofile r_emailaddress',
            :fields => ['id', 'email-address', 'first-name', 'last-name', 'headline',
                        'industry', 'picture-url', 'public-profile-url', 'location',
                        'positions', 'educations']
        end

    I can't figure out how to get rid of that second unnecessary and misleading prompt from LinkedIn and would appreciate any guidance! Thank you.

    Read the article

  • Calling SubmitChanges on DataContext does not update database.

    - by drasto
    In a C# ASP.NET MVC application I use LINQ to SQL to provide data for my application. I have a simple database schema set up for this (pictured in the original post). In my controller class I reference this data context, called Model, like this:

        private Model model = new Model();

    I've got a table (list) of Series rendered on my page. It renders properly, and I was able to add delete functionality for Series like this:

        public ActionResult Delete(int id)
        {
            model.Series.DeleteOnSubmit(model.Series.SingleOrDefault(s => s.ID == id));
            model.SubmitChanges();
            return RedirectToAction("Index");
        }

    where the corresponding action link looks like this:

        <%: Html.ActionLink("Delete", "Delete", new { id=item.ID })%>

    Create (implemented in a similar way) also works fine. However, Edit does not work. My edit actions look like this:

        public ActionResult Edit(int id)
        {
            return View(model.Series.SingleOrDefault(s => s.ID == id));
        }

        [HttpPost]
        public ActionResult Edit(Series series)
        {
            if (ModelState.IsValid)
            {
                UpdateModel(series);
                series.Title = series.Title + " some string to ensure title has changed";
                model.SubmitChanges();
                return RedirectToAction("Index");
            }
        }

    I have verified that my database has a primary key set up correctly. I debugged the application and found that everything works as expected until the line with model.SubmitChanges(); that call does not apply the changes to the Title property (or any other) to the database. Please help.
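
    For contrast with the posted Edit action, here is a sketch of the "re-fetch, then copy values" pattern that LINQ to SQL change tracking expects when the entity coming back from the form was never attached to this DataContext; the properties copied are an assumption:

        [HttpPost]
        public ActionResult Edit(Series series)
        {
            if (!ModelState.IsValid)
                return View(series);

            // Load the instance tracked by this context, then copy the posted values onto it.
            Series tracked = model.Series.SingleOrDefault(s => s.ID == series.ID);
            if (tracked == null)
                return RedirectToAction("Index");

            tracked.Title = series.Title;
            // ...copy any other edited properties here...

            model.SubmitChanges();   // the tracked instance has pending changes, so an UPDATE is emitted
            return RedirectToAction("Index");
        }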

    Read the article

  • VS 2008 does not understand .resource files

    - by Dmitry
    I'm trying to add globalization support to my C# application. According to MSDN, there should be one embedded resource file for the neutral culture and satellite DLLs with resource files for the other cultures. I've created two satellite DLLs without any problems and got my app to automatically load the right one using ResourceManager. But I can't embed the default neutral-culture resource file into my executable. When I remove all the satellite DLLs, or set the culture to one I don't have a satellite DLL for, I get the exception "Could not find any resources appropriate for the specified culture or the neutral culture." when the application attempts to create the ResourceManager. It looks like VS 2008 does not include my .resource file in the main assembly. I've tried different ways to get the resource file embedded: compiling it with resgen.exe from a text file and adding it to the project; changing its name to add a second .resources extension; creating a .resx file with the same name; etc. I still don't see a way to get the resource file embedded and used by ResourceManager; I keep getting the same exception. What is the right way to add a default neutral-culture resource file to an application in VS 2008?
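
    For reference, a sketch of the arrangement that normally lets the neutral culture come from the main assembly: a .resx compiled into the executable plus a NeutralResourcesLanguage attribute, with satellite DLLs supplying only the other cultures. The base name and culture here are placeholders:

        using System.Globalization;
        using System.Reflection;
        using System.Resources;

        // Usually placed in AssemblyInfo.cs: tells the ResourceManager which culture the
        // resources embedded in the main assembly represent, so no satellite is needed for it.
        [assembly: NeutralResourcesLanguage("en")]

        class Localized
        {
            // "MyApp.Strings" would correspond to a Strings.resx in the MyApp project
            // with Build Action = Embedded Resource.
            private static readonly ResourceManager Resources =
                new ResourceManager("MyApp.Strings", Assembly.GetExecutingAssembly());

            public static string Get(string key)
            {
                // Falls back to the embedded neutral resources when no satellite matches CurrentUICulture.
                return Resources.GetString(key, CultureInfo.CurrentUICulture);
            }
        }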

    Read the article

  • How to use jaxp 3 with jdk 1.6?

    - by Michal
    I'm trying to migrate an application from JDK 1.5 to JDK 1.6 without introducing any changes visible to the end user. The application's output is XML generated using JAXP, which is part of the JDK libraries. Since the JAXP versions differ between JDK 1.5 and 1.6, the resulting XML looks different in each version. An example: DatatypeFactory.newInstance().newDuration(60) produces 'PT2H17M0.000S' in JDK 1.5 and 'P0Y0M0DT2H17M0.000S' in JDK 1.6. Both are correct, but I want to avoid any visible changes. Classes like DatatypeFactory have a mechanism that allows specifying which implementation should be used, but it relies on specifying a fully qualified class name. So theoretically I could download JAXP jars of the same version used in JDK 1.5 and let the application use them. Unfortunately, the package and class names are the same in both versions, so I would have to somehow tell Java to load the classes from the jar and not from the JDK. I tried putting the JAXP jars at the beginning of the classpath, but it didn't help. Is it possible to tell Java to load classes from an external jar rather than the JDK libraries? Can I solve this problem in any other way? Thanks in advance

    Read the article

  • How to connect a new query script with SSMS add-in?

    - by squillman
    I'm trying to create an SSMS add-in. One of the things I want to do is to create a new query window and programmatically connect it to a server instance (in the context of a SQL login). I can create the new query script window just fine, but I can't find how to connect it without first manually connecting to something else (like the Object Explorer). In other words, if I connect Object Explorer to a SQL instance manually and then execute the method of my add-in that creates the query window, I can connect it using this code:

        ServiceCache.ScriptFactory.CreateNewBlankScript(
            Editors.ScriptType.Sql,
            ServiceCache.ScriptFactory.CurrentlyActiveWndConnectionInfo.UIConnectionInfo,
            null);

    But I don't want to rely on CurrentlyActiveWndConnectionInfo.UIConnectionInfo for the connection. I want to set a SQL login username and password programmatically. Does anyone have any ideas? EDIT: I've managed to get the query window connected by setting the last parameter to an instance of System.Data.SqlClient.SqlConnection. However, the connection uses the context of the last login that was connected instead of what I'm trying to set programmatically. That is, the user it connects as is the one selected in the Connection dialog that you get when you click the New Query button and don't have an Object Explorer connected. EDIT2: I'm writing (or hoping to write) an add-in to automatically send a SQL statement and its execution results to our case-tracking system when run against our production servers. One thought I had was to remove write permissions and assign logins through this add-in, which would also force the user to enter a case #, cancelling the statement if it's not there. Another thought I've just had is to inspect the server name in ServiceCache.ScriptFactory.CurrentlyActiveWndConnectionInfo.UIConnectionInfo and compare it to our list of production servers. If it matches and there's no case #, then cancel the query.

    Read the article

  • LocalAlloc and LocalRealloc usage

    - by PaulH
    I have a Visual Studio 2008 C++ Windows Mobile 6 application where I'm using a FindFirst() / FindNext() style API to get a collection of items. I do not know how many items will be in the list ahead of time, so I would like to dynamically allocate an array for these items. Normally, I would use a std::vector<>, but, for other reasons, that's not an option for this application. So, I'm using LocalAlloc() and LocalRealloc(). What I'm not clear on is whether this memory should be marked fixed or moveable. The application runs fine either way; I'm just wondering what's 'correct'.

        int count = 0;
        INFO_STRUCT* info = ( INFO_STRUCT* )LocalAlloc( LHND, sizeof( INFO_STRUCT ) );
        while( S_OK == GetInfo( &info[ count ] ) )
        {
            ++count;
            info = ( INFO_STRUCT* )LocalRealloc( info, sizeof( INFO_STRUCT ) * ( count + 1 ), LHND );
        }
        if( count > 0 )
        {
            // use the data in some interesting way...
        }
        LocalFree( info );

    Thanks, PaulH

    Read the article
