Search Results

Search found 13862 results on 555 pages for 'questions'.


  • Using DateTime in a SqlParameter for Stored Procedure, format error

    - by Matt
    I'm trying to call a stored procedure (on a SQL 2005 server) from C#, .NET 2.0, using DateTime as the value of a SqlParameter. The SQL type in the stored procedure is 'datetime'. Executing the sproc from SQL Management Studio works fine. But every time I call it from C# I get an error about the date format. When I run SQL Profiler to watch the calls, I copy and paste the exec call to see what's going on. These are my observations and notes about what I've attempted: 1) If I pass the DateTime in directly as a DateTime or converted to SqlDateTime, the field is surrounded by a PAIR of single quotes, such as @Date_Of_Birth=N''1/8/2009 8:06:17 PM'' 2) If I pass the DateTime in as a string, I only get the single quotes 3) Using SqlDateTime.ToSqlString() does not result in a UTC-formatted datetime string (even after converting to universal time) 4) Using DateTime.ToString() does not result in a UTC-formatted datetime string. 5) Manually setting the DbType for the SqlParameter to DateTime does not change the above observations. So my question, then, is how on earth do I get C# to pass a properly formatted time in the SqlParameter? Surely this is a common use case; why is it so difficult to get working? I can't seem to convert DateTime to a string that is SQL compatible (e.g. '2009-01-08T08:22:45') EDIT RE: BFree, the code to actually execute the sproc is as follows: using (SqlCommand sprocCommand = new SqlCommand(sprocName)) { sprocCommand.Connection = transaction.Connection; sprocCommand.Transaction = transaction; sprocCommand.CommandType = System.Data.CommandType.StoredProcedure; sprocCommand.Parameters.AddRange(parameters.ToArray()); sprocCommand.ExecuteNonQuery(); } To go into more detail about what I have tried: parameters.Add(new SqlParameter("@Date_Of_Birth", DOB)); parameters.Add(new SqlParameter("@Date_Of_Birth", DOB.ToUniversalTime())); parameters.Add(new SqlParameter("@Date_Of_Birth", DOB.ToUniversalTime().ToString())); SqlParameter param = new SqlParameter("@Date_Of_Birth", System.Data.SqlDbType.DateTime); param.Value = DOB.ToUniversalTime(); parameters.Add(param); SqlParameter param = new SqlParameter("@Date_Of_Birth", SqlDbType.DateTime); param.Value = new SqlDateTime(DOB.ToUniversalTime()); parameters.Add(param); parameters.Add(new SqlParameter("@Date_Of_Birth", new SqlDateTime(DOB.ToUniversalTime()).ToSqlString())); Additional EDIT The one I thought most likely to work: SqlParameter param = new SqlParameter("@Date_Of_Birth", System.Data.SqlDbType.DateTime); param.Value = DOB; results in this value in the exec call as seen in SQL Profiler: @Date_Of_Birth=''2009-01-08 15:08:21:813'' If I modify this to be @Date_Of_Birth='2009-01-08T15:08:21' it works, but it won't parse with the pair of single quotes, and it won't convert to a datetime correctly with the space between the date and time and with the milliseconds on the end. Update and Success First and foremost, thank you everyone for the answers. I post this for the sake of completeness and accuracy on SO - because I certainly do not do it for my pride... I had copy/pasted the code above after the request from below. I trimmed things here and there to be concise. It turns out my problem was in the code I left out, which I'm sure any one of you would have spotted in an instant. I had wrapped my sproc calls inside a transaction. It turns out that I was simply not calling transaction.Commit()!!!!! I'm ashamed to say it, but there you have it. I still don't know what's going on with the syntax I get back from the profiler.
    A coworker watched with his own instance of the profiler from his computer, and it returned proper syntax. Watching the very SAME executions from my profiler showed the incorrect syntax. It acted as a red herring, making me believe there was a query syntax problem instead of the much simpler and true answer, which was that I needed to commit the transaction! I marked an answer below as correct, and threw in some up-votes on others because they did, after all, answer the question, even if they didn't fix my specific (brain lapse) issue. Thanks again for the help.
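
    A minimal sketch of the working pattern described above, assuming a hypothetical stored procedure dbo.SaveMember and a caller-supplied connection string (both illustrative): the parameter is typed as SqlDbType.DateTime, the DateTime is passed as a value rather than a formatted string, and the transaction is explicitly committed.

        using System;
        using System.Data;
        using System.Data.SqlClient;

        static class SprocDateExample
        {
            public static void SaveDateOfBirth(string connectionString, DateTime dateOfBirth)
            {
                using (SqlConnection conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    using (SqlTransaction tx = conn.BeginTransaction())
                    using (SqlCommand cmd = new SqlCommand("dbo.SaveMember", conn, tx))
                    {
                        cmd.CommandType = CommandType.StoredProcedure;
                        // Pass the DateTime directly; ADO.NET sends it as a typed value,
                        // so no string formatting (and no quoting) is involved.
                        cmd.Parameters.Add("@Date_Of_Birth", SqlDbType.DateTime).Value = dateOfBirth;
                        cmd.ExecuteNonQuery();
                        tx.Commit(); // the step that was missing in the account above
                    }
                }
            }
        }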

    Read the article

  • Looking for feedback on a first SAML implementation.

    - by morgancodes
    Hello, I've been tasked with designing a very simple SSO (single sign-on) process. My employer has specified that it should be implimented in SAML. I'd like to create messages that are absolutely as simple as possible while confirming to the SAML spec. I'd be really grateful if some of you would look at my request and response messages and tell me if they make sense for my purpose, if they include anything that doesn't need to be there, and if they are missing anything that does need to be there. Addionally, I'd like to know where in the response I should put additional information about the subject; in particular, the subject's email address. The interaction needs to work as follows: 1) User requests service from service provider at this point, the service provider knows nothing about the user. 2) Service provider requests authentication for user from identity provider 3) User is authenticated/registered by identity provider 4) Identity provider responds to Service provider with authentication success message, PLUS user's email address. Here's what I think the request should be: <?xml version="1.0" encoding="UTF-8"?> <samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" ID="abc" IssueInstant="1970-01-01T00:00:00.000Z" Version="2.0" AssertionConsumerServiceURL="http://www.IdentityProvider.com/loginPage"> <saml:Issuer xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"> http://www.serviceprovider.com </saml:Issuer> <saml:Subject> <saml:NameID Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient">3f7b3dcf-1674-4ecd-92c8-1544f346baf8</saml:NameID> </saml:Subject> Here's what I think the response should be: <?xml version="1.0" encoding="UTF-8"?> <samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" Destination="http://www.serviceprovider.com/desitnationURL" ID="123" IssueInstant="2008-11-21T17:13:42.872Z" Version="2.0"> <samlp:Status> <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/> </samlp:Status> <saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" Version="2.0"> <saml:Subject> <saml:NameID Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient">3f7b3dcf-1674-4ecd-92c8-1544f346baf8</saml:NameID> <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:profiles:SSO:browser"> <saml:SubjectConfirmationData InResponseTo="abc"/> </saml:SubjectConfirmation> </saml:Subject> <saml:AuthnStatement AuthnInstant="2008-11-21T17:13:42.899Z"> <saml:AuthnContext> <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport</saml:AuthnContextClassRef> </saml:AuthnContext> </saml:AuthnStatement> </saml:Assertion> </samlp:Response> So, again, my questions are: 1) Is this a valid SAML interaction? 2) Can either the request or response xml be simplified? 3) Where in the response should I put the subject's email address? I really apprecaite your help. Thanks so much! -Morgan
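
    On question 3 (where to carry the subject's email address), the conventional place is an AttributeStatement inside the assertion, next to the AuthnStatement. Below is a minimal C# sketch that appends such a statement with System.Xml; the attribute name "email" and the assumption that the assertion element already exists are for illustration only, not taken from the post.

        using System.Xml;

        static class SamlEmailAttributeSketch
        {
            const string SamlNs = "urn:oasis:names:tc:SAML:2.0:assertion";

            // Appends <saml:AttributeStatement> with a single email attribute
            // to an existing <saml:Assertion> element.
            public static void AddEmailAttribute(XmlDocument doc, XmlElement assertion, string email)
            {
                XmlElement statement = doc.CreateElement("saml", "AttributeStatement", SamlNs);
                XmlElement attribute = doc.CreateElement("saml", "Attribute", SamlNs);
                attribute.SetAttribute("Name", "email"); // attribute name is illustrative
                XmlElement value = doc.CreateElement("saml", "AttributeValue", SamlNs);
                value.InnerText = email;

                attribute.AppendChild(value);
                statement.AppendChild(attribute);
                assertion.AppendChild(statement);
            }
        }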

    Read the article

  • Entity Framework many-to-many using VB.Net Lambda

    - by bgs264
    Hello, I'm a newbie to StackOverflow so please be kind ;) I'm using Entity Framework in Visual Studio 2010 Beta 2 (.NET framework 4.0 Beta 2). I have created an entity framework .edmx model from my database and I have a handful of many-to-many relationships. A trivial example of my database schema is Roles (ID, Name, Active) Members (ID, DateOfBirth, DateCreated) RoleMembership(RoleID, MemberID) I am now writing the custom role provider (Inheriting System.Configuration.Provider.RoleProvider) and have come to write the implementation of IsUserInRole(username, roleName). The LINQ-to-Entity queries which I wrote, when SQL-Profiled, all produced CROSS JOIN statements when what I want is for them to INNER JOIN. Dim query = From m In dc.Members From r In dc.Roles Where m.ID = 100 And r.Name = "Member" Select m My problem is almost exactly described here: http://stackoverflow.com/questions/553918/entity-framework-and-many-to-many-queries-unusable I'm sure that the solution presented there works well, but whilst I studied Java at uni and I can mostly understand C# I cannot understand this Lambda syntax provided and I need to get a similar example in VB. I've looked around the web for the best part of half a day but I'm not closer to my answer. So please can somebody advise how, in VB, I can construct a LINQ statement which would do this equivalent in SQL: SELECT rm.RoleID FROM RoleMembership rm INNER JOIN Roles r ON r.ID = rm.RoleID INNER JOIN Members m ON m.ID = rm.MemberID WHERE r.Name = 'Member' AND m.ID = 101 I would use this query to see if Member 101 is in Role 3. (I appreciate I probably don't need the join to the Members table in SQL but I imagine in LINQ I'd need to bring in the Member object?) UPDATE: I'm a bit closer by using multiple methods: Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load Dim count As Integer Using dc As New CBLModel.CBLEntities Dim persons = dc.Members.Where(AddressOf myTest) count = persons.Count End Using System.Diagnostics.Debugger.Break() End Sub Function myTest(ByVal m As Member) As Boolean Return m.ID = "100" AndAlso m.Roles.Select(AddressOf myRoleTest).Count > 0 End Function Function myRoleTest(ByVal r As Role) As Boolean Return r.Name = "Member" End Function SQL Profiler shows this: SQL:BatchStarting SELECT [Extent1].[ID] AS [ID], ... (all columns from Members snipped for brevity) ... FROM [dbo].[Members] AS [Extent1] RPC:Completed exec sp_executesql N'SELECT [Extent2].[ID] AS [ID], [Extent2].[Name] AS [Name], [Extent2].[Active] AS [Active] FROM [dbo].[RoleMembership] AS [Extent1] INNER JOIN [dbo].[Roles] AS [Extent2] ON [Extent1].[RoleID] = [Extent2].[ID] WHERE [Extent1].[MemberID] = @EntityKeyValue1',N'@EntityKeyValue1 int',@EntityKeyValue1=100 SQL:BatchCompleted SELECT [Extent1].[ID] AS [ID], ... (all columns from Members snipped for brevity) ... FROM [dbo].[Members] AS [Extent1] I'm not certain why it is using sp_execsql for the inner join statement and why it's still running a select to select ALL members though. Thanks. 
UPDATE 2 I've written it by turning the above "multiple methods" into lambda expressions then all into one query, like this: Dim allIDs As String = String.Empty Using dc As New CBLModel.CBLEntities For Each retM In dc.Members.Where(Function(m As Member) m.ID = 100 AndAlso m.Roles.Select(Function(r As Role) r.Name = "Doctor").Count > 0) allIDs &= retM.ID.ToString & ";" Next End Using But it doesn't seem to work: "Doctor" is not a role that exists, I just put it in there for testing purposes, yet "allIDs" still gets set to "100;" The SQL in SQL Profiler this time looks like this: SELECT [Project1].* FROM ( SELECT [Extent1].*, (SELECT COUNT(1) AS [A1] FROM [dbo].[RoleMembership] AS [Extent2] WHERE [Extent1].[ID] = [Extent2].[MemberID]) AS [C1] FROM [dbo].[Members] AS [Extent1] ) AS [Project1] WHERE (100 = [Project1].[ID]) AND ([Project1].[C1] > 0) For brevity I turned the list of all the columns from the Members table into * As you can see it's just ignoring the "Role" query... :/
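
    The asker wants VB, but for reference this is the query shape that typically yields INNER JOINs rather than a CROSS JOIN: navigate the Roles collection and filter with Any (the VB form is the same query with Function(...) lambdas). One note on the snippet above: Select(Function(r) r.Name = "Doctor") projects booleans rather than filtering, so its Count is non-zero whenever the member has any role at all; Where or Any with the predicate is what actually filters. A C# sketch, reusing CBLEntities and the Member.Roles navigation property from the post:

        using System.Linq;

        static class RoleCheckSketch
        {
            // Is the given member in the given role? EF generates the joins over
            // the RoleMembership junction table from the Roles navigation property.
            public static bool IsMemberInRole(CBLModel.CBLEntities dc, int memberId, string roleName)
            {
                return dc.Members.Any(m => m.ID == memberId
                                        && m.Roles.Any(r => r.Name == roleName));
            }
        }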

    Read the article

  • Understanding Node.js and concept of non-blocking I/O

    - by Saif Bechan
    Recently I became interested in using Node.js to tackle some parts of my web application. I love that it's all JavaScript and very lightweight, so instead of a JavaScript-to-PHP call you make a lighter JavaScript-to-JavaScript call. However, I do not understand all the concepts explained. Basic concepts Now in the presentation for Node.js Ryan Dahl talks about non-blocking I/O and why this is the way we need to create our programs. I can understand the theoretical concept. You just don't wait for a response; you go ahead and do other things. You make a callback for the response, and when the response arrives millions of clock cycles later, you can fire that. If you have not already, I recommend watching this presentation. It is very easy to follow and pretty detailed. There are some nice concepts explained on how to write your code in a good manner. There are also some examples given, and I am going to work with the basic example given. Examples The way we do things now: puts("Enter your name: "); var name = gets(); puts("Name: " + name); Now the problem with this is that the code is halted at line 1. It blocks your code. The way we need to do things according to Node: puts("Enter your name: "); gets(function (name) { puts("Name: " + name); }); Now with this your program does not halt, because the input is a function within the output. So the program continues to work without halting. Questions Now the basic question I have is how this works in real-life situations. I am talking here about use in web applications. The application I am writing does I/O, but it still does it in a blocking manner. I think that most of the time, if not all, you need to block, because you have to wait for the response you need to work with. When you need to get some information from the database, most of the time this data needs to be verified before you can go further with the code. Example 1 Take a login, for example. You have to wait for the database response to return, because you cannot do anything else. I can't see a way around this without blocking. Example 2 Going back to the basic example: the user just requests something from a database which does not need any verification. You still have to block because you have nothing more to do. I cannot come up with a single example where you would want to do other things while you wait for the response to return. Possible answers I have read that this frees up resources. When you program like this it takes less CPU or memory usage. So this non-blocking I/O is ONLY meant to free up resources and does not have any other practical use. Not that this isn't a huge plus; freeing up resources is always good. Yet I fail to see it as a good solution, because in both of the above examples the program has to wait for the response. Whether this happens inside a callback or just inline, in my opinion there is a program waiting for input. Resources I looked at I have looked at some resources before I posted this question. They talk a lot about the theoretical concept, which is quite clear. Yet I fail to see some real-life examples where this makes a huge difference. Stack Overflow: What is in simple words blocking IO and non-blocking IO? Blocking IO vs non-blocking IO; looking for good articles tidy code for asynchronous IO Other resources: Wikipedia: Asynchronous I/O Introduction to non-blocking I/O The C10K problem
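
    As a .NET-flavoured analogy to the login example (this is not Node code, and the Users table and connection handling are made up for illustration): in both shapes the request that needs the login result still waits for the database, but in the non-blocking shape the thread is freed to serve other requests while the query is in flight, which is the resource argument above applied to a server handling many concurrent clients.

        using System;
        using System.Data.SqlClient;

        static class LoginSketch
        {
            // Blocking: the calling thread is parked for the whole database round trip.
            public static bool LoginBlocking(SqlConnection conn, string user)
            {
                using (SqlCommand cmd = new SqlCommand(
                    "SELECT COUNT(*) FROM Users WHERE Name = @n", conn))
                {
                    cmd.Parameters.AddWithValue("@n", user);
                    return (int)cmd.ExecuteScalar() > 0; // thread waits here
                }
            }

            // Non-blocking: start the query, return immediately, and let the callback
            // fire when the result arrives -- the same idea as gets(function (name) { ... }).
            // (Requires "Asynchronous Processing=true" in the connection string.)
            public static void LoginNonBlocking(SqlConnection conn, string user, Action<bool> onResult)
            {
                SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Users WHERE Name = @n", conn);
                cmd.Parameters.AddWithValue("@n", user);
                cmd.BeginExecuteReader(ar =>
                {
                    using (SqlDataReader reader = cmd.EndExecuteReader(ar))
                    {
                        reader.Read();
                        onResult(reader.GetInt32(0) > 0);
                    }
                }, null);
            }
        }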

    Read the article

  • C++ custom exceptions: run time performance and passing exceptions from C++ to C

    - by skyeagle
    I am writing a custom C++ exception class (so I can pass exceptions occuring in C++ to another language via a C API). My initial plan of attack was to proceed as follows: //C++ myClass { public: myClass(); ~myClass(); void foo() // throws myException int foo(const int i, const bool b) // throws myException } * myClassPtr; // C API #ifdef __cplusplus extern "C" { #endif myClassPtr MyClass_New(); void MyClass_Destroy(myClassPtr p); void MyClass_Foo(myClassPtr p); int MyClass_FooBar(myClassPtr p, int i, bool b); #ifdef __cplusplus }; #endif I need a way to be able to pass exceptions thrown in the C++ code to the C side. The information I want to pass to the C side is the following: (a). What (b). Where (c). Simple Stack Trace (just the sequence of error messages in order they occured, no debugging info etc) I want to modify my C API, so that the API functions take a pointer to a struct ExceptionInfo, which will contain any exception info (if an exception occured) before consuming the results of the invocation. This raises two questions: Question 1 1. Implementation of each of the C++ methods exposed in the C API needs to be enclosed in a try/catch statement. The performance implications for this seem quite serious (according to this article): "It is a mistake (with high runtime cost) to use C++ exception handling for events that occur frequently, or for events that are handled near the point of detection." At the same time, I remember reading somewhere in my C++ days, that all though exception handling is expensive, it only becmes expensive when an exception actually occurs. So, which is correct?. what to do?. Is there an alternative way that I can trap errors safely and pass the resulting error info to the C API?. Or is this a minor consideration (the article after all, is quite old, and hardware have improved a bit since then). Question 2 I wuld like to modify the exception class given in that article, so that it contains a simple stack trace, and I need some help doing that. Again, in order to make the exception class 'lightweight', I think its a good idea not to include any STL classes, like string or vector (good idea/bad idea?). Which potentially leaves me with a fixed length C string (char*) which will be stack allocated. So I can maybe just keep appending messages (delimted by a unique separator [up to maximum length of buffer])... Its been a while since I did any serious C++ coding, and I will be grateful for the help. BTW, this is what I have come up with so far (I am intentionally, not deriving from std::exception because of the performance reasons mentioned in the article, and I am instead, throwing an integral exception (based on an exception enumeration): class fast_exception { public: fast_exception(int what, char const* file=0, int line=0) : what_(what), line_(line), file_(file) {/*empty*/} int what() const { return what_; } int line() const { return line_; } char const* file() const { return file_; } private: int what_; int line_; char const[MAX_BUFFER_SIZE] file_; }

    Read the article

  • Launching Intent.ACTION_VIEW intent not working on saved image file

    - by Savvas Dalkitsis
    First of all let me say that this questions is slightly connected to another question by me. Actually it was created because of that. I have the following code to write a bitmap downloaded from the net to a file in the sd card: // Get image from url URL u = new URL(url); HttpGet httpRequest = new HttpGet(u.toURI()); HttpClient httpclient = new DefaultHttpClient(); HttpResponse response = (HttpResponse) httpclient.execute(httpRequest); HttpEntity entity = response.getEntity(); BufferedHttpEntity bufHttpEntity = new BufferedHttpEntity(entity); InputStream instream = bufHttpEntity.getContent(); Bitmap bmImg = BitmapFactory.decodeStream(instream); instream.close(); // Write image to a file in sd card File posterFile = new File(Environment.getExternalStorageDirectory().getAbsolutePath()+"/Android/data/com.myapp/files/image.jpg"); posterFile.createNewFile(); BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(posterFile)); Bitmap mutable = Bitmap.createScaledBitmap(bmImg,bmImg.getWidth(),bmImg.getHeight(),true); mutable.compress(Bitmap.CompressFormat.JPEG, 100, out); out.flush(); out.close(); // Launch default viewer for the file Intent intent = new Intent(); intent.setAction(android.content.Intent.ACTION_VIEW); intent.setDataAndType(Uri.parse(posterFile.getAbsolutePath()),"image/*"); ((Activity) getContext()).startActivity(intent); A few notes. I am creating the "mutable" bitmap after seeing someone using it and it seems to work better than without it. And i am using the parse method on the Uri class and not the fromFile because in my code i am calling these in different places and when i am creating the intent i have a string path instead of a file. Now for my problem. The file gets created. The intent launches a dialog asking me to select a viewer. I have 3 viewers installed. The Astro image viewer, the default media gallery (i have a milstone on 2.1 but on the milestone the 2.1 update did not include the 3d gallery so it's the old one) and the 3d gallery from the nexus one (i found the apk in the wild). Now when i launch the 3 viewers the following happen: Astro image viewer: The activity launches but i see nothing but a black screen. Media Gallery: i get an exception dialog shown "The application Media Gallery (process com.motorola.gallery) has stopped unexpectedly. Please try again" with a force close option. 3D gallery: Everything works as it should. When i try to simply open the file using the Astro file manager (browse to it and simply click) i get the same option dialog but this time things are different: Astro image viewer: Everything works as it should. Media Gallery: Everything works as it should. 3D gallery: The activity launches but i see nothing but a black screen. As you can see everything is a complete mess. I have no idea why this happens but it happens like this every single time. It's not a random bug. Am i missing something when i am creating the intent? Or when i am creating the image file? Any ideas?

    Read the article

  • FMOD.net streaming, callback and exinfo parameters

    - by Tesserex
    I posted a question on gamedev about how to play nsf files (NES console music) in FMOD. It didn't get any results, but since then I made some progress. I decided that the easiest method was just to compile an existing player into a dll and then call it from C# to populate my buffer. The problem now is getting it to sound right, and making sure all my paremeters are correct. Here are the facts so far: The nsf dll is dealing with shorts, so the data is PCM16. The sample nsf I'm using has a playback rate of 60 Hz. Just for playing around now, I'm using a frequency of 48000. Based on 2 and 3, the dll calculates a necessary buffer size of 48000 / 60hz = 800. This means it will render 800 shorts worth of buffer for every simulated NES frame. I've so far got my C# code to play the nsf, at the correct pitch and tempo, but it's very grainy / fuzzy, which I'm attributing to the fact that the FMOD read callback is giving a data length of 1600, whereas I should be expecting 800. I've tried playing around with all the numbers and it either crashes, or the music changes pitch, tempo, or both. Here's some of my C# code: uint channels = 1, frequency = 48000; FMOD.MODE mode = (FMOD.MODE.DEFAULT | FMOD.MODE.OPENUSER | FMOD.MODE.LOOP_NORMAL); FMOD.Sound sound = new FMOD.Sound(); FMOD.CREATESOUNDEXINFO ex = new FMOD.CREATESOUNDEXINFO(); ex.cbsize = Marshal.SizeOf(ex); ex.fileoffset = 0; ex.format = FMOD.SOUND_FORMAT.PCM16; // does this even matter? It doesn't change my results as long as it's long enough for one update ex.length = frequency; ex.numchannels = (int)channels; ex.defaultfrequency = (int)frequency; ex.pcmreadcallback = pcmreadcallback; ex.dlsname = null; // eventually I will calculate this with frequency / nsf hz, but I'm just testing for now ex.decodebuffersize = 800; // from the dll load_nsf_file("file.nsf", 8, (int)frequency); // 8 is the track number to play var result = system.createSound( (string)null, (mode | FMOD.MODE.CREATESTREAM), ref ex, ref sound); channel = new FMOD.Channel(); result = system.playSound(FMOD.CHANNELINDEX.FREE, sound, false, ref channel); private FMOD.RESULT PCMREADCALLBACK(IntPtr soundraw, IntPtr data, uint datalen) { // from the dll process_buffer(data, (int)800); // if I use datalen, it usually crashes (I can't get datalen to = 800 safely) return FMOD.RESULT.OK; } So here are some of my questions: What is the relationship between exinfo.decodebuffersize, frequency, and the datalen parameter of the read callback? With this code sample, it's coming in as 3200. I don't know where that factor of 4 between it and the decodebuffersize comes from. Is datalen in the callback referring to number of bytes, or shorts? The process_buffer function takes a short array and its length. I would expect fmod is talking about shorts as well because I told it PCM16. Maybe my playback quality is bad for some totally different reason. If so I have no idea where to begin solving that. Any ideas there?
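
    One assumption worth testing, offered as a sketch rather than a verified answer: FMOD's PCM read callback reports datalen in bytes, while the NSF player's 800 is a count of 16-bit samples, so the callback should convert bytes to samples and feed the renderer in 800-sample frames instead of passing a hard-coded 800 (process_buffer and the 48000 Hz / 60 Hz figures are from the code above).

        private FMOD.RESULT PCMReadCallback(IntPtr soundraw, IntPtr data, uint datalen)
        {
            int samplesWanted = (int)(datalen / 2); // PCM16 mono: 2 bytes per sample
            int samplesDone = 0;

            while (samplesDone < samplesWanted)
            {
                // Render at most one simulated NES frame (800 samples = 48000 / 60) per call.
                int chunk = Math.Min(800, samplesWanted - samplesDone);
                IntPtr dest = new IntPtr(data.ToInt64() + samplesDone * 2); // advance by bytes written
                process_buffer(dest, chunk); // native call from the post
                samplesDone += chunk;
            }
            return FMOD.RESULT.OK;
        }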

    Read the article

  • How do you automap List<float> or float[] with Fluent NHibernate?

    - by Tom Bushell
    Having successfully gotten a sample program working, I'm now starting to do Real Work with Fluent NHibernate - trying to use Automapping on my project's class heirarchy. It's a scientific instrumentation application, and the classes I'm mapping have several properties that are arrays of floats e.g. private float[] _rawY; public virtual float[] RawY { get { return _rawY; } set { _rawY = value; } } These arrays can contain a maximum of 500 values. I didn't expect Automapping to work on arrays, but tried it anyway, with some success at first. Each array was auto mapped to a BLOB (using SQLite), which seemed like a viable solution. The first problem came when I tried to call SaveOrUpdate on the objects containing the arrays - I got "No persister for float[]" exceptions. So my next thought was to convert all my arrays into ILists e.g. public virtual IList<float> RawY { get; set; } But now I get: NHibernate.MappingException: Association references unmapped class: System.Single Since Automapping can deal with lists of complex objects, it never occured to me it would not be able to map lists of basic types. But after doing some Googling for a solution, this seems to be the case. Some people seem to have solved the problem, but the sample code I saw requires more knowledge of NHibernate than I have right now - I didn't understand it. Questions: 1. How can I make this work with Automapping? 2. Also, is it better to use arrays or lists for this application? I can modify my app to use either if necessary (though I prefer lists). Edit: I've studied the code in Mapping Collection of Strings, and I see there is test code in the source that sets up an IList of strings, e.g. public virtual IList<string> ListOfSimpleChildren { get; set; } [Test] public void CanSetAsElement() { new MappingTester<OneToManyTarget>() .ForMapping(m => m.HasMany(x => x.ListOfSimpleChildren).Element("columnName")) .Element("class/bag/element").Exists(); } so this must be possible using pure Automapping, but I've had zero luck getting anything to work, probably because I don't have the requisite knowlege of manually mapping with NHibernate. Starting to think I'm going to have to hack this (by encoding the array of floats as a single string, or creating a class that contains a single float which I then aggregate into my lists), unless someone can tell me how to do it properly. End Edit Here's my CreateSessionFactory method, if that helps formulate a reply... private static ISessionFactory CreateSessionFactory() { ISessionFactory sessionFactory = null; const string autoMapExportDir = "AutoMapExport"; if( !Directory.Exists(autoMapExportDir) ) Directory.CreateDirectory(autoMapExportDir); try { var autoPersistenceModel = AutoMap.AssemblyOf<DlsAppOverlordExportRunData>() .Where(t => t.Namespace == "DlsAppAutomapped") .Conventions.Add( DefaultCascade.All() ) ; sessionFactory = Fluently.Configure() .Database(SQLiteConfiguration.Standard .UsingFile(DbFile) .ShowSql() ) .Mappings(m => m.AutoMappings.Add(autoPersistenceModel) .ExportTo(autoMapExportDir) ) .ExposeConfiguration(BuildSchema) .BuildSessionFactory() ; } catch (Exception e) { Debug.WriteLine(e); } return sessionFactory; }
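
    For reference, here is a sketch of how an automapping override can declare the float collection as a collection of simple elements (a child table with a value column) instead of an entity association; the table and column names are illustrative, and it presumes DlsAppOverlordExportRunData (the type already referenced in CreateSessionFactory) is the class that owns RawY.

        using FluentNHibernate.Automapping;
        using FluentNHibernate.Automapping.Alterations;

        public class RunDataMappingOverride : IAutoMappingOverride<DlsAppOverlordExportRunData>
        {
            public void Override(AutoMapping<DlsAppOverlordExportRunData> mapping)
            {
                // Map IList<float> RawY as simple elements rather than an association.
                mapping.HasMany(x => x.RawY)
                       .Element("RawYValue")   // value column (illustrative name)
                       .Table("RunDataRawY");  // child table  (illustrative name)
            }
        }

        // Hooked up where the AutoPersistenceModel is built, e.g.:
        // AutoMap.AssemblyOf<DlsAppOverlordExportRunData>()
        //        .Where(t => t.Namespace == "DlsAppAutomapped")
        //        .UseOverridesFromAssemblyOf<RunDataMappingOverride>()
        //        .Conventions.Add(DefaultCascade.All());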

    Read the article

  • Where to Store the Protection Trial Info for Software Protection Purpose

    - by Peter Lee
    It might be a duplicate of other questions, but I swear that I googled a lot and searched StackOverflow.com a lot, and I cannot find the answer to my question: in a C#.NET application, where do you store the protection trial info, such as Expiration Date and Number of Used Times? I understand that all kinds of software protection strategies can be cracked by a sophisticated hacker (because they can almost always get around the expiration checking step). But what I'm now going to do is just protect it in a reasonable manner so that a "common"/"advanced" user cannot screw it up. OK, in order to prove that I have googled and searched a lot at StackOverflow.com, I'm listing all the possible strategies I've got: 1. Registry Entry First, some users might not even have access to read the registry. Second, if we put the protection trial info in a registry entry, the user can always find out where it is by comparing the differences before and after the software installation. They can simply change it. OK, you might say that we should encrypt the protection trial info; yes, we can do that. But what if the user just changes their system date before installing? OK, you might say that we should also store a last-used date; if something is wrong, the last-used date could work as a protection guide. But what if the user just uninstalls the software, deletes all registry entries related to it, and then reinstalls the software? I have no idea how to deal with this. Please help. 2. A Plain File First, there are some places to put the plain file: 2.a) a simple XML file under the software installation path 2.b) a configuration file Again, the user can just uninstall the software, remove these plain file(s), and reinstall the software. 3. The Software Itself If we put the protection trial info (Expiration Date; we cannot put Number of Used Times) in the software itself, it is still susceptible to the cases I mentioned above. Furthermore, it's not even cool to do so. 4. A Trial Product-Key It works like a licensing process, that is, we put the trial info into an RSA-signed string. However, it requires too many steps for a user who just wants to try the software (they might lose patience): 4.a) The user downloads the software; 4.b) The user sends an email to request a Trial Product-Key by providing a user name (or email) or hardware info; 4.c) The server receives the request, RSA-signs it and sends it back to the user; 4.d) The user can now use it under the conditions of (Expiration Date & Number of Used Times). Now the server has a record of the user's username or hardware info, so the user will be rejected if they request a second trial. Is it legal to collect hardware info? In a word, the user has to do one extra step (request a Trial Product-Key) just to try the software, which is not cool (thinking of myself as a user). NOTE: This question is not about licensing; instead, it's about where to store the TRIAL info. After the trial expires, the user should ask for a license (CD-Key/Product-Key). I'm going to use an RSA signature (bound to the user's hardware).
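
    Whichever location is picked, the record itself can at least be made tamper-evident so that hand-editing the value is detected, even though deleting it (or reinstalling) cannot be prevented - that is the gap the trial product-key approach closes server-side. A sketch using DPAPI, with a made-up file path and field layout:

        using System;
        using System.IO;
        using System.Security.Cryptography;
        using System.Text;

        static class TrialStore
        {
            // Hypothetical location; a real application might mirror it in the registry too.
            static readonly string StorePath = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
                @"MyApp\trial.bin");

            public static void Save(DateTime expires, int usesLeft, DateTime lastUsed)
            {
                string payload = expires.Ticks + "|" + usesLeft + "|" + lastUsed.Ticks;
                byte[] protectedBytes = ProtectedData.Protect(
                    Encoding.UTF8.GetBytes(payload), null, DataProtectionScope.LocalMachine);
                Directory.CreateDirectory(Path.GetDirectoryName(StorePath));
                File.WriteAllBytes(StorePath, protectedBytes);
            }

            // Returns false when the record is missing or has been tampered with
            // (Unprotect throws on modified data, which callers can treat as expired).
            public static bool TryLoad(out DateTime expires, out int usesLeft, out DateTime lastUsed)
            {
                expires = DateTime.MinValue; usesLeft = 0; lastUsed = DateTime.MinValue;
                if (!File.Exists(StorePath)) return false;
                try
                {
                    byte[] raw = ProtectedData.Unprotect(
                        File.ReadAllBytes(StorePath), null, DataProtectionScope.LocalMachine);
                    string[] parts = Encoding.UTF8.GetString(raw).Split('|');
                    expires = new DateTime(long.Parse(parts[0]));
                    usesLeft = int.Parse(parts[1]);
                    lastUsed = new DateTime(long.Parse(parts[2]));
                    return true;
                }
                catch (CryptographicException) { return false; }
            }
        }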

    Read the article

  • Converting listview to a composite control

    - by Paul
    The link below shows a listview composite control in its most basic form however I can't seem to extend this to what I'm trying to do. http://stackoverflow.com/questions/92689/how-to-define-listview-templates-in-code My listview has 1 tablerow with 2 fields. The first field contains 1 element whilst the second field contains 2 elements as shown below. <asp:ListView ID="lstArticle" runat="server"> <LayoutTemplate> <table id="itemPlaceholder" runat="server"> </table> </LayoutTemplate> <ItemTemplate> <div class="ctrlArticleList_rptStandardItem"> <table width="100%"> <tr> <td valign="top" class="ctrlArticleList_firstColumnWidth"> <a href="<%# GetURL(Container.DataItem) %>"> <%# GetImage(Container.DataItem)%> </a> </td> <td valign="top"> <span class="ctrlArticleList_title"> <a href="<%# GetURL(Container.DataItem) %>"> <%# DataBinder.Eval(Container.DataItem, "Title") %> </a> </span> <span class="ctrlArticleList_date"> <%# convertToShortDate(DataBinder.Eval(Container.DataItem, "ActiveDate"))%> </span> <div class="ctrlArticleList_standFirst"> <%# DataBinder.Eval(Container.DataItem, "StandFirst")%> </div> </td> </tr> </table> </div> </ItemTemplate> So to convert this I have to use the InstantiateIn method of the ITemplate for the layouttemplate and the ItemTemplate. Data is bound to the itemtemplate using the DataBinding event. My question at the moment is how do I setup the ItemTemplate class and bind the values from the above ListView ItemTemplate. This is what I've managed so far: private class LayoutTemplate : ITemplate { public void InstantiateIn(Control container) { var table = new HtmlGenericControl("table") { ID = "itemPlaceholder" }; container.Controls.Add(table); } } private class ItemTemplate : ITemplate { public void InstantiateIn(Control container) { var tableRow = new HtmlGenericControl("tr"); //first field in row var tableFieldImage = new HtmlGenericControl("td"); var imageLink = new HtmlGenericControl("a"); imageLink.ID = "imageLink"; tableRow.Controls.Add(imageLink); // second field in row var tableFieldTitle = new HtmlGenericControl("td"); var title = new HtmlGenericControl("a"); tableFieldTitle.Controls.Add(title); tableRow.Controls.Add(tableFieldTitle); //Bind the data with the controls tableRow.DataBinding += BindData; //add the controls to the container container.Controls.Add(tableRow); } public void BindData(object sender, EventArgs e) { var container = (HtmlGenericControl)sender; var dataItem = ((ListViewDataItem)container.NamingContainer).DataItem; container.Controls.Add(new Literal() { Text = dataItem.ToString() }); } One of the functions in the ListView just for an example is: public string GetURL(object objArticle) { ArticleItem articleItem = (ArticleItem)objArticle; Article article = new Article(Guid.Empty, articleItem.ContentVersionID); return GetArticleURL(article); } Summary What do I need to do to convert the following into a composite control: 1. <a href="<%# GetURL(Container.DataItem) %>"><%# GetImage(Container.DataItem)%></a> 2. <a href="<%# GetURL(Container.DataItem) %>"> <%# DataBinder.Eval(Container.DataItem, "Title") %> </a>
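
    To finish the binding question raised above: the values the <%# ... %> expressions pulled have to be read inside a DataBinding handler, via the ListViewDataItem that is the control's NamingContainer, and it is usually easiest to attach that handler to each control that needs data rather than to the row. A sketch along those lines, reusing the GetURL helper and the Title property from the markup (it assumes the usual System.Web.UI.* usings, and that the template class can reach GetURL, e.g. by making it static or passing the parent control in):

        private class ItemTemplate : ITemplate
        {
            public void InstantiateIn(Control container)
            {
                var tableRow = new HtmlGenericControl("tr");
                var tableFieldTitle = new HtmlGenericControl("td");
                var titleLink = new HtmlGenericControl("a") { ID = "titleLink" };

                // Bind the control that needs data, not the row.
                titleLink.DataBinding += TitleLink_DataBinding;

                tableFieldTitle.Controls.Add(titleLink);
                tableRow.Controls.Add(tableFieldTitle);
                container.Controls.Add(tableRow);
            }

            private void TitleLink_DataBinding(object sender, EventArgs e)
            {
                var link = (HtmlGenericControl)sender;
                var dataItem = ((ListViewDataItem)link.NamingContainer).DataItem;

                // Equivalent of href="<%# GetURL(Container.DataItem) %>" and the Title Eval.
                link.Attributes["href"] = GetURL(dataItem);
                link.InnerText = DataBinder.Eval(dataItem, "Title").ToString();
            }
        }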

    Read the article

  • Memory Troubles with UIImagePicker

    - by Dan Ray
    I'm building an app that has several different sections to it, all of which are pretty image-heavy. It ties in with my client's website and they're a "high-design" type outfit. One piece of the app is images uploaded from the camera or the library, and a tableview that shows a grid of thumbnails. Pretty reliably, when I'm dealing with the camera version of UIImagePickerControl, I get hit for low memory. If I bounce around that part of the app for a while, I occasionally and non-repeatably crash with "status:10 (SIGBUS)" in the debugger. On low memory warning, my root view controller for that aspect of the app goes to my data management singleton, cruises through the arrays of cached data, and kills the biggest piece, the image associated with each entry. Thusly: - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Low Memory Warning" message:@"Cleaning out events data" delegate:nil cancelButtonTitle:@"All right then." otherButtonTitles:nil]; [alert show]; [alert release]; NSInteger spaceSaved; DataManager *data = [DataManager sharedDataManager]; for (Event *event in data.eventList) { spaceSaved += [(NSData *)UIImagePNGRepresentation(event.image) length]; event.image = nil; spaceSaved -= [(NSData *)UIImagePNGRepresentation(event.image) length]; } NSString *titleString = [NSString stringWithFormat:@"Saved %d on event images", spaceSaved]; for (WondrMark *mark in data.wondrMarks) { spaceSaved += [(NSData *)UIImagePNGRepresentation(mark.image) length]; mark.image = nil; spaceSaved -= [(NSData *)UIImagePNGRepresentation(mark.image) length]; } NSString *messageString = [NSString stringWithFormat:@"And total %d on event and mark images", spaceSaved]; NSLog(@"%@ - %@", titleString, messageString); // Relinquish ownership any cached data, images, etc that aren't in use. } As you can see, I'm making a (poor) attempt to eyeball the memory space I'm freeing up. I know it's not telling me about the actual memory footprint of the UIImages themselves, but it gives me SOME numbers at least, so I can see that SOMETHING'S happening. (Sorry for the hamfisted way I build that NSLog message too--I was going to fire another UIAlertView, but realized it'd be more useful to log it.) Pretty reliably, after toodling around in the image portion of the app for a while, I'll pull up the camera interface and get the low memory UIAlertView like three or four times in quick succession. Here's the NSLog output from the last time I saw it: 2010-05-27 08:55:02.659 EverWondr[7974:207] Saved 109591 on event images - And total 1419756 on event and mark images wait_fences: failed to receive reply: 10004003 2010-05-27 08:55:08.759 EverWondr[7974:207] Saved 4 on event images - And total 392695 on event and mark images 2010-05-27 08:55:14.865 EverWondr[7974:207] Saved 4 on event images - And total 873419 on event and mark images 2010-05-27 08:55:14.969 EverWondr[7974:207] Saved 4 on event images - And total 4 on event and mark images 2010-05-27 08:55:15.064 EverWondr[7974:207] Saved 4 on event images - And total 4 on event and mark images And then pretty soon after that we get our SIGBUS exit. So that's the situation. Now my specific questions: THE time I see this happening is when the UIPickerView's camera iris shuts. 
I click the button to take the picture, it does the "click" animation, and Instruments shows my memory footprint going from about 10mb to about 25mb, and sitting there until the image is delivered to my UIViewController, where usage drops back to 10 or 11mb again. If we make it through that without a memory warning, we're golden, but most likely we don't. Anything I can do to make that not be so expensive? Second, I have NSZombies enabled. Am I understanding correctly that that's actually preventing memory from being freed? Am I subjecting my app to an unfair test environment? Third, is there some way to programmatically get my memory usage? Or at least the usage for a UIImage object? I've scoured the docs and don't see anything about that.

    Read the article

  • Why do I get the error "Only antlib URIs can be located from the URI alone,not the URI" when trying to run hibernate tools in my build.xml

    - by Casbah
    I'm trying to run hibernate tools in an ant build to generate ddl from my JPA annotations. Ant dies on the taskdef tag. I've tried with ant 1.7, 1.6.5, and 1.6 to no avail. I've tried both in eclipse and outside. I've tried including all the hbn jars in the hibernate-tools path and not. Note that I based my build file on this post: http://stackoverflow.com/questions/281890/hibernate-jpa-to-ddl-command-line-tools I'm running eclipse 3.4 with WTP 3.0.1 and MyEclipse 7.1 on Ubuntu 8. Build.xml: <project name="generateddl" default="generate-ddl"> <path id="hibernate-tools"> <pathelement location="../libraries/hibernate-tools/hibernate-tools.jar" /> <pathelement location="../libraries/hibernate-tools/bsh-2.0b1.jar" /> <pathelement location="../libraries/hibernate-tools/freemarker.jar" /> <pathelement location="../libraries/jtds/jtds-1.2.2.jar" /> <pathelement location="../libraries/hibernate-tools/jtidy-r8-20060801.jar" /> </path> <taskdef classname="org.hibernate.tool.ant.HibernateToolTask" classpathref="hibernate-tools"/> <target name="generate-ddl" description="Export schema to DDL file"> <!-- compile model classes before running hibernatetool --> <!-- task definition; project.class.path contains all necessary libs <taskdef name="hibernatetool" classname="org.hibernate.tool.ant.HibernateToolTask" classpathref="project.class.path" /> --> <hibernatetool destdir="sql"> <!-- check that directory exists --> <jpaconfiguration persistenceunit="default" /> <classpath> <dirset dir="WebRoot/WEB-INF/classes"> <include name="**/*"/> </dirset> </classpath> <hbm2ddl outputfilename="schemaexport.sql" format="true" export="false" drop="true" /> </hibernatetool> </target> Error message (ant -v): Apache Ant version 1.7.0 compiled on December 13 2006 Buildfile: /home/joe/workspace/bento/ant-generate-ddl.xml parsing buildfile /home/joe/workspace/bento/ant-generate-ddl.xml with URI = file:/home/joe/workspace/bento/ant-generate-ddl.xml Project base dir set to: /home/joe/workspace/bento [antlib:org.apache.tools.ant] Could not load definitions from resource org/apache/tools/ant/antlib.xml. It could not be found. BUILD FAILED /home/joe/workspace/bento/ant-generate-ddl.xml:12: Only antlib URIs can be located from the URI alone,not the URI at org.apache.tools.ant.taskdefs.Definer.execute(Definer.java:216) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:105) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:357) at org.apache.tools.ant.helper.ProjectHelper2.parse(ProjectHelper2.java:140) at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.parseBuildFile(InternalAntRunner.java:191) at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.run(InternalAntRunner.java:400) at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.main(InternalAntRunner.java:137) Total time: 195 milliseconds

    Read the article

  • codes to convert from avi to asf

    - by George2
    Hello everyone, No matter what library/SDK to use, I want to convert from avi to asf very quickly (I could even sacrifice some quality of video and audio). I am working on Windows platform (Vista and 2008 Server), better .Net SDK/code, C++ code is also fine. :-) I learned from the below link, that there could be a very quick way to convert from avi to asf to support streaming better, as mentioned "could convert the video from AVI to ASF format using a simple copy (i.e. the content is the same, but container changes).". My question is after some hours of study and trial various SDK/tools, as a newbie, I do not know how to begin with so I am asking for reference sample code to do this task. :-) (as this is a different issue, we decide to start a new topic. :-) ) http://stackoverflow.com/questions/743220/streaming-avi-file-issue thanks in advance, George EDIT 1: I have tried to get the binary of ffmpeg from, http://ffmpeg.arrozcru.org/autobuilds/ffmpeg-latest-mingw32-static.tar.bz2 then run the following command, C:\software\ffmpeg-latest-mingw32-static\bin>ffmpeg.exe -i test.avi -acodec copy -vcodec copy test.asf FFmpeg version SVN-r18506, Copyright (c) 2000-2009 Fabrice Bellard, et al. configuration: --enable-memalign-hack --prefix=/mingw --cross-prefix=i686-ming w32- --cc=ccache-i686-mingw32-gcc --target-os=mingw32 --arch=i686 --cpu=i686 --e nable-avisynth --enable-gpl --enable-zlib --enable-bzlib --enable-libgsm --enabl e-libfaac --enable-pthreads --enable-libvorbis --enable-libmp3lame --enable-libo penjpeg --enable-libtheora --enable-libspeex --enable-libxvid --enable-libfaad - -enable-libschroedinger --enable-libx264 libavutil 50. 3. 0 / 50. 3. 0 libavcodec 52.25. 0 / 52.25. 0 libavformat 52.32. 0 / 52.32. 0 libavdevice 52. 2. 0 / 52. 2. 0 libswscale 0. 7. 1 / 0. 7. 1 built on Apr 14 2009 04:04:47, gcc: 4.2.4 Input #0, avi, from 'test.avi': Duration: 00:00:44.86, start: 0.000000, bitrate: 5291 kb/s Stream #0.0: Video: msvideo1, rgb555le, 1280x1024, 5 tbr, 5 tbn, 5 tbc Stream #0.1: Audio: pcm_s16le, 22050 Hz, mono, s16, 352 kb/s Output #0, asf, to 'test.asf': Stream #0.0: Video: CRAM / 0x4D415243, rgb555le, 1280x1024, q=2-31, 1k tbn, 5 tbc Stream #0.1: Audio: pcm_s16le, 22050 Hz, mono, s16, 352 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop encoding frame= 224 fps=222 q=-1.0 Lsize= 29426kB time=44.80 bitrate=5380.7kbits/s video:26910kB audio:1932kB global headers:0kB muxing overhead 2.023317% C:\software\ffmpeg-latest-mingw32-static\bin> http://www.microsoft.com/windows/windowsmedia/player/webhelp/default.aspx?&mpver=11.0.6001.7000&id=C00D11B1&contextid=230&originalid=C00D36E6 then have the following error when using Windows Media Player to play it, does anyone have any ideas? http://www.microsoft.com/windows/windowsmedia/player/webhelp/default.aspx?&mpver=11.0.6001.7000&id=C00D11B1&contextid=230&originalid=C00D36E6
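
    If the stream-copy route keeps producing an ASF that Windows Media Player refuses, one plausible explanation is the codecs being copied into the container (MS Video 1 video and raw PCM audio), which WMP may not accept inside ASF; a still fairly quick fallback is to let ffmpeg re-encode to WMV/WMA. A hedged C# sketch that shells out to the same ffmpeg.exe used above; the codec names and bitrates are assumptions to experiment with, not a verified recipe.

        using System.Diagnostics;

        static class AviToAsf
        {
            public static void Convert(string ffmpegPath, string inputAvi, string outputAsf)
            {
                var psi = new ProcessStartInfo
                {
                    FileName = ffmpegPath,
                    // Re-encode instead of copying: WMV2 video and WMAv2 audio inside ASF.
                    Arguments = string.Format(
                        "-i \"{0}\" -vcodec wmv2 -b 2000k -acodec wmav2 -ab 128k \"{1}\"",
                        inputAvi, outputAsf),
                    UseShellExecute = false,
                    CreateNoWindow = true
                };
                using (Process process = Process.Start(psi))
                {
                    process.WaitForExit();
                }
            }
        }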

    Read the article

  • Need help with BOOST_FOREACH/compiler bug

    - by Jacek Lawrynowicz
    I know that boost or compiler should be last to blame, but I can't see another explanation here. I'm using msvc 2008 SP1 and boost 1.43. In the following code snippet execution never leaves third BOOST_FOREACH loop typedef Graph<unsigned, unsigned>::VertexIterator Iter; Graph<unsigned, unsigned> g; g.createVertex(0x66); // works fine Iter it = g.getVertices().first, end = g.getVertices().second; for(; it != end; ++it) ; // fine std::pair<Iter, Iter> p = g.getVertices(); BOOST_FOREACH(unsigned handle, p) ; // fine unsigned vertex_count = 0; BOOST_FOREACH(unsigned handle, g.getVertices()) vertex_count++; // oops, infinite loop vertex_count = 0; BOOST_FOREACH(unsigned handle, g.getVertices()) vertex_count++; vertex_count = 0; BOOST_FOREACH(unsigned handle, g.getVertices()) vertex_count++; // ... last block repeated 7 times Iterator code: class Iterator : public boost::iterator_facade<Iterator, unsigned const, boost::bidirectional_traversal_tag> { public: Iterator() : list(NULL), handle(INVALID_ELEMENT_HANDLE) {} explicit Iterator(const VectorElementsList &list, unsigned handle = INVALID_ELEMENT_HANDLE) : list(&list), handle(handle) {} friend std::ostream& operator<<(std::ostream &s, const Iterator &it) { s << "[list: " << it.list <<", handle: " << it.handle << "]"; return s; } private: friend class boost::iterator_core_access; void increment() { handle = list->getNext(handle); } void decrement() { handle = list->getPrev(handle); } unsigned const& dereference() const { return handle; } bool equal(Iterator const& other) const { return handle == other.handle && list == other.list; } const VectorElementsList<T> *list; unsigned handle; }; Some ASM fun: vertex_count = 0; BOOST_FOREACH(unsigned handle, g.getVertices()) // initialization 013E1369 mov edi,dword ptr [___defaultmatherr+8 (13E5034h)] // end iterator handle: 0xFFFFFFFF 013E136F mov ebp,dword ptr [esp+0ACh] // begin iterator handle: 0x0 013E1376 lea esi,[esp+0A8h] // begin iterator list pointer 013E137D mov ebx,esi 013E137F nop // forever loop begin 013E1380 cmp ebp,edi 013E1382 jne main+238h (13E1388h) 013E1384 cmp ebx,esi 013E1386 je main+244h (13E1394h) 013E1388 lea eax,[esp+18h] 013E138C push eax // here iterator is incremented in ram 013E138D call boost::iterator_facade<detail::VectorElementsList<Graph<unsigned int,unsigned int>::VertexWrapper>::Iterator,unsigned int const ,boost::bidirectional_traversal_tag,unsigned int const &,int>::operator++ (13E18E0h) 013E1392 jmp main+230h (13E1380h) vertex_count++; // forever loop end It's easy to see that iterator handle is cached in EBP and it never gets incremented despite of a call to iterator operator++() function. I've replaced Itarator implmentation with one deriving from std::iterator and the issue persisted, so this is not iterator_facade fault. This problem exists only on msvc 2008 SP1 x86 and amd64 release builds. Debug builds on msvc 2008 and debug/release builds on msvc 2010 and gcc 4.4 (linux) works fine. Furthermore the BOOST_FOREACH block must be repeaded exacly 10 times. If it's repeaded 9 times, it's all OK. I guess that due to BOOST_FOREACH use of template trickery (const auto_any), compiler assumes that iterator handle is constant and never reads its real value again. I would be very happy to hear that my code is wrong, correct it and move on with BOOST_FOREACH, which I'm very found of (as opposed to BOOST_FOREVER :). May be related to: http://stackoverflow.com/questions/1275852/why-does-boost-foreach-not-work-sometimes-with-c-strings

    Read the article

  • Forms Authentication works on dev server but not production server (same SQL db)

    - by Desmond
    Hi, I have the same problem as a previously solved question however, this solution did not help me. I have posted the previous question and answer below: http://stackoverflow.com/questions/2215963/forms-authentication-works-on-dev-server-but-not-production-server-same-sql-db/2963985#2963985 Question: I've never had this problem before, I'm at a total loss. I have a SQL Server 2008 database with ASP.NET Forms Authentication, profiles and roles created and is functional on the development workstation. I can login using the created users without problem. I back up the database on the development computer and restore it on the production server. I xcopy the DLLs and ASP.NET files to the server. I make the necessary changes in the web.config, changing the SQL connection strings to point to the production server database and upload it. I've made sure to generate a machine key and it is the same on both the development web.config and the production web.config. And yet, when I try to login on the production server, the same user that I'm able to login successfully with on the development computer, fails on the production server. There is other content in the database, the schema generated by FluentNHibernate. This content is able to be queried successfully on both development and production servers. This is mind boggling, I believe I've verified everything, but obviously it is still not working and I must have missed something. Please, any ideas? Answer: I ran into a problem with similar symptoms at one point by forgetting to set the applicationName attribute in the web.config under the membership providers element. Users are associated to a specific application. Since I didn't set the applicationName, it defaulted to the application path (something like "/MyApplication"). When it was moved to production, the path changed (for example to "/WebSiteFolder/SomeSubFolder /MyApplication"), so the application name defaulted to the new production path and an association could not be made to the original user accounts that were set up in development. Could your issues possibly be the same as mine? I have this already in my web.config but still get the issue. Any ideas? 
<membership> <providers> <clear/> <add name="AspNetSqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="ApplicationServices" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" passwordFormat="Hashed" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" passwordStrengthRegularExpression="" applicationName="/"/> </providers> </membership> <profile> <providers> <clear/> <add name="AspNetSqlProfileProvider" type="System.Web.Profile.SqlProfileProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="ApplicationServices" applicationName="/"/> </providers> </profile> <roleManager enabled="false"> <providers> <clear/> <add connectionStringName="ApplicationServices" applicationName="/" name="AspNetSqlRoleProvider" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/> <add applicationName="/" name="AspNetWindowsTokenRoleProvider" type="System.Web.Security.WindowsTokenRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/> </providers> </roleManager> Any help is greatly appriciated.
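
    With applicationName already pinned to "/" in every provider, one useful next step is to look at what the restored production database actually contains, i.e. which application row the restored users hang off and whether its name really is "/". A small diagnostic sketch against the standard ASP.NET 2.0 membership tables (aspnet_Applications / aspnet_Users), using the ApplicationServices connection string from the config above:

        using System;
        using System.Configuration;
        using System.Data.SqlClient;

        static class MembershipDiagnostic
        {
            // Lists each application in the membership database and its user count,
            // so the value can be compared against applicationName="/" in web.config.
            public static void DumpApplications()
            {
                string cs = ConfigurationManager.ConnectionStrings["ApplicationServices"].ConnectionString;
                using (SqlConnection conn = new SqlConnection(cs))
                using (SqlCommand cmd = new SqlCommand(
                    @"SELECT a.ApplicationName, COUNT(u.UserId) AS Users
                      FROM dbo.aspnet_Applications a
                      LEFT JOIN dbo.aspnet_Users u ON u.ApplicationId = a.ApplicationId
                      GROUP BY a.ApplicationName", conn))
                {
                    conn.Open();
                    using (SqlDataReader r = cmd.ExecuteReader())
                    {
                        while (r.Read())
                        {
                            Console.WriteLine("{0}: {1} user(s)", r["ApplicationName"], r["Users"]);
                        }
                    }
                }
            }
        }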

    Read the article

  • What is good practice in .NET system architecture design concerning multiple models and aggregates

    - by BuzzBubba
    I'm designing a larger enterprise architecture and I'm in a doubt about how to separate the models and design those. There are several points I'd like suggestions for: - models to define - way to define models Currently my idea is to define: Core (domain) model Repositories to get data to that domain model from a database or other store Business logic model that would contain business logic, validation logic and more specific versions of forms of data retrieval methods View models prepared for specifically formated data output that would be parsed by views of different kind (web, silverlight, etc). For the first model I'm puzzled at what to use and how to define the mode. Should this model entities contain collections and in what form? IList, IEnumerable or IQueryable collections? - I'm thinking of immutable collections which IEnumerable is, but I'd like to avoid huge data collections and to offer my Business logic layer access with LINQ expressions so that query trees get executed at Data level and retrieve only really required data for situations like the one when I'm retrieving a very specific subset of elements amongst thousands or hundreds of thousands. What if I have an item with several thousands of bids? I can't just make an IEnumerable collection of those on the model and then retrieve an item list in some Repository method or even Business model method. Should it be IQueryable so that I actually pass my queries to Repository all the way from the Business logic model layer? Should I just avoid collections in my domain model? Should I void only some collections? Should I separate Domain model and BusinessLogic model or integrate those? Data would be dealt trough repositories which would use Domain model classes. Should repositories be used directly using only classes from domain model like data containers? This is an example of what I had in mind: So, my Domain objects would look like (e.g.) public class Item { public string ItemName { get; set; } public int Price { get; set; } public bool Available { get; set; } private IList<Bid> _bids; public IQueryable<Bid> Bids { get { return _bids.AsQueryable(); } private set { _bids = value; } } public AddNewBid(Bid newBid) { _bids.Add(new Bid {.... } } Where Bid would be defined as a normal class. Repositories would be defined as data retrieval factories and used to get data into another (Business logic) model which would again be used to get data to ViewModels which would then be rendered by different consumers. I would define IQueryable interfaces for all aggregating collections to get flexibility and minimize data retrieved from real data store. Or should I make Domain Model "anemic" with pure data store entities and all collections define for business logic model? One of the most important questions is, where to have IQueryable typed collections? - All the way from Repositories to Business model or not at all and expose only solid IList and IEnumerable from Repositories and deal with more specific queries inside Business model, but have more finer grained methods for data retrieval within Repositories. So, what do you think? Have any suggestions?
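
    On the narrower question of where IQueryable should stop: a common compromise is to keep the aggregate itself free of query types and let the repository expose IQueryable<T>, so the business layer can compose filters that still execute in the database, while view models only ever receive materialized lists. A sketch of that split, using the Item entity from the example above and otherwise hypothetical names:

        using System.Collections.Generic;
        using System.Linq;

        // Repository boundary: composable queries, nothing executed yet.
        public interface IItemRepository
        {
            IQueryable<Item> Items { get; }
        }

        public class ItemService
        {
            private readonly IItemRepository _repository;
            public ItemService(IItemRepository repository) { _repository = repository; }

            // Business layer composes the filter; only the requested page is materialized.
            public IList<Item> GetAvailableItemsCheaperThan(int maxPrice, int page, int pageSize)
            {
                return _repository.Items
                    .Where(i => i.Available && i.Price <= maxPrice)
                    .OrderBy(i => i.ItemName)
                    .Skip(page * pageSize)
                    .Take(pageSize)
                    .ToList(); // executes in the database, returns a solid list for the view model
            }
        }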

    Read the article

  • Java Fx Data bind not working with File Read

    - by rjha94
    Hi I am using a very simple JavaFx client to upload files. I read the file in chunks of 1 MB (using Byte Buffer) and upload using multi part POST to a PHP script. I want to update the progress bar of my client to show progress after each chunk is uploaded. The calculations for upload progress look correct but the progress bar is not updated. I am using bind keyword. I am no JavaFx expert and I am hoping some one can point out my mistake. I am writing this client to fix the issues posted here (http://stackoverflow.com/questions/2447837/upload-1gb-files-using-chunking-in-php) /* * Main.fx * * Created on Mar 16, 2010, 1:58:32 PM */ package webgloo; import javafx.stage.Stage; import javafx.scene.Scene; import javafx.scene.layout.VBox; import javafx.geometry.VPos; import javafx.scene.control.Button; import javafx.scene.control.Label; import javafx.scene.layout.HBox; import javafx.scene.layout.LayoutInfo; import javafx.scene.text.Font; import javafx.scene.control.ProgressBar; import java.io.FileInputStream; /** * @author rajeev jha */ var totalBytes:Float = 1; var bytesWritten:Float = 0; var progressUpload:Float; var uploadURI = "http://www.test1.com/test/receiver.php"; var postMax = 1024000 ; function uploadFile(inputFile: java.io.File) { totalBytes = inputFile.length(); bytesWritten = 1; println("To-Upload - {totalBytes}"); var is = new FileInputStream(inputFile); var fc = is.getChannel(); //1 MB byte buffer var chunkCount = 0; var bb = java.nio.ByteBuffer.allocate(postMax); while(fc.read(bb) >= 0){ println("loop:start"); bb.flip(); var limit = bb.limit(); var bytes = GigaFileUploader.getBufferBytes(bb.array(), limit); var content = GigaFileUploader.createPostContent(inputFile.getName(), bytes); GigaFileUploader.upload(uploadURI, content); bytesWritten = bytesWritten + limit ; progressUpload = 1.0 * bytesWritten / totalBytes ; println("Progress is - {progressUpload}"); chunkCount++; bb.clear(); println("loop:end"); } } var label = Label { font: Font { size: 12 } text: bind "Uploaded - {bytesWritten * 100 / (totalBytes)}%" layoutInfo: LayoutInfo { vpos: VPos.CENTER maxWidth: 120 minWidth: 120 width: 120 height: 30 } } def jFileChooser = new javax.swing.JFileChooser(); jFileChooser.setApproveButtonText("Upload"); var button = Button { text: "Upload" layoutInfo: LayoutInfo { width: 100 height: 30 } action: function () { var outputFile = jFileChooser.showOpenDialog(null); if (outputFile == javax.swing.JFileChooser.APPROVE_OPTION) { uploadFile(jFileChooser.getSelectedFile()); } } } var hBox = HBox { spacing: 10 content: [label, button] } var progressBar = ProgressBar { progress: bind progressUpload layoutInfo: LayoutInfo { width: 240 height: 30 } } var vBox = VBox { spacing: 10 content: [hBox, progressBar] layoutX: 10 layoutY: 10 } Stage { title: "Upload File" width: 270 height: 120 scene: Scene { content: [vBox] } resizable: false }

    Read the article

  • ASP.NET roles and Projects

    - by Zyphrax
    EDIT - Rewrote my original question to give a bit more information Background info At my work I'm working on an ASP.NET web application for our customers. In our implementation we use technologies like Forms authentication with MembershipProviders and RoleProviders. All went well until I ran into some difficulties with configuring the roles, because the roles aren't system-wide, but related to the customer accounts and projects. I can't name our exact setup/formula, because I think our company wouldn't approve that... What's a customer / project? Our company provides management information for our customers on a yearly (or other interval) basis. In our systems a customer/contract consists of: one Account: information about the Company per Account, one or more Products: the bundle of management information we'll provide per Product, one or more Measurements: a period of time, in which we gather and report the data Extranet site setup Eventually we want all customers to be able to access their management information with our online system. The extranet consists of two sites: Company site: provides an overview of Account information and the Products Measurement site: after selecting a Measurement, detailed information on that period of time The measurement site is the most interesting part of the extranet. We will create submodules for new overviews, reports, managing and maintaining resources that are important for the research. Our Visual Studio solution consists of a number of projects. One web application named Portal forms the basis. The sites and modules are virtual directories within that application (which makes it easier to share MasterPages, among other things). What kind of roles? The following users (read: roles) will be using the system: Admins: development users :) (not customer related, full access) Employees: employees of our company (not customer related, full access) Customer SuperUser: top level managers (full access to their account/measurement) Customer ContactPerson: primary contact (full access to their measurement(s)) Customer Manager: a department manager (limited access, specific data of a measurement) What about ASP.NET users? The system will have many ASP.NET users; let's focus on the customer users: Users are not shared between Accounts SuperUser X automatically has access to all (and new) measurements User Y could be Primary contact for Measurement 1, but have no role for Measurement 2 User Y could be Primary contact for Measurement 1, but have a Manager role for Measurement 2 The department managers are many individual users (per Measurement); if Manager Z had a login for Measurement 1, we would like to use that login again if he participates in Measurement 2. URL structure These are typical URLs in our application: http://host/login - the login screen http://host/project - the account/product overview screen (measurement selection) http://host/project/1000 - measurement (id:1000) details http://host/project/1000/planning - planning overview (for primary contact/superuser) http://host/project/1000/reports - report downloads (manager of department X can only access report X) We will also create a document URL, where you can request a specific document by its GUID. The system will have to check if the user has rights to the document. The document is related to a Measurement; the User or specific roles have specific rights to the document. What's the problem? (finally ;)) Roles alone aren't enough to determine whether a user is allowed to see/access/download a specific item.
It's not enough to say that a certain navigation item is accessible to Managers. When the user requests Measurement 1000, we have to check that the user not only has a Manager role, but a Manager role for Measurement 1000. Summarized: How can we limit users to their accounts/measurements? (remember that superusers see all measurements, while some managers see only specific measurements) How can we apply roles at a product/measurement level? (user X could be primary contact for measurement 1, but just a manager for measurement 2) How can we limit manager access to the reports screen, and only to their department's reports? All with the magic of ASP.NET classes, perhaps with a custom RoleProvider implementation. Similar Stack Overflow question/problem: http://stackoverflow.com/questions/1367483/asp-net-how-to-manage-users-with-different-types-of-roles
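
    A hedged sketch of one way the per-measurement check could be factored. IMeasurementPermissionService and MeasurementGuard are invented names for illustration, not part of ASP.NET or of the code base described above:

    using System;

    // Hypothetical lookup: maps (user, measurement) to the role the user holds
    // for that measurement, or null when the user has no relation to it.
    public interface IMeasurementPermissionService
    {
        string GetRoleForMeasurement(string userName, int measurementId);
    }

    public static class MeasurementGuard
    {
        // Intended to be called early in a measurement page request
        // (e.g. /project/1000/reports), after Forms authentication has
        // established the user name.
        public static bool CanAccess(IMeasurementPermissionService permissions,
                                     string userName,
                                     int measurementId,
                                     params string[] allowedRoles)
        {
            string role = permissions.GetRoleForMeasurement(userName, measurementId);
            if (role == null)
                return false;   // no relation to this measurement at all
            if (role == "SuperUser")
                return true;    // superusers see every measurement of their account
            return Array.IndexOf(allowedRoles, role) >= 0;
        }
    }

    The same lookup could back a custom RoleProvider if role names are encoded per measurement (for example "Manager_1000"); that is one common work-around, not the only option, and the department-level report filter would still need a second, finer-grained check.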

    Read the article

  • More Outlook VSTO help...

    - by Jerry
    I posted a question here (http://stackoverflow.com/questions/1095195/how-do-i-set-permissions-on-my-vsto-outlook-add-in) and I was able to build my installer. I thought that once the installer was built, everything would work fine. I was wrong. It works on about half of the PCs I've run the installer on. My problem is that the other half doesn't work. I'm trying to install an add-in for Outlook 2003. I've even gone so far as to create the steps manually by using a batch file. Nothing seems to work on these PCs and I can't find a common denominator that I can rule out or in that will make the VSTO add-in work. Here is the batch file I am using. What am I doing/not doing wrong with this? I could really use a VSTO expert's help. Thanks!!!! EDIT I've changed the batch file and registry settings to reflect recent updates to them. I've also attached the error text that comes from the PCs that don't work. @echo off echo Installing Visual Studio for Office Runtime (SE 2005)... ..\VSTO\vstor.exe echo Creating Directories... mkdir "c:\program files\Project Archiver" echo Installing Add-In... echo Copying files... xcopy /Y *.dll "c:\program files\Project Archiver" xcopy /Y *.manifest "c:\program files\Project Archiver" echo Setting Security... "C:\Windows\Microsoft.NET\Framework\v2.0.50727\caspol.exe" -polchgprompt off "C:\Windows\Microsoft.NET\Framework\v2.0.50727\caspol.exe" -u -ag All_Code -url "c:\program files\Project Archiver\ProjectArchiver.dll" FullTrust -n "Project Archiver" -d "Outlook plugin for archiving" "C:\Windows\Microsoft.NET\Framework\v2.0.50727\caspol.exe" -u -ag All_Code -url "c:\program files\Project Archiver\Microsoft.Office.Interop.SmartTags.dll" FullTrust -n "Project Archiver" -d "Outlook plugin for archiving" "C:\Windows\Microsoft.NET\Framework\v2.0.50727\caspol.exe" -polchgprompt on echo Loading Registry Values... "c:\program files\Project Archiver\VSTO_settings.reg" echo "That should do it." pause I took the registry settings (mentioned in the batch file above) straight from a PC that this application worked on.
The VSTO Registry settings I am using are : Windows Registry Editor Version 5.00 [HKEY_CLASSES_ROOT\ProjectArchiver\CLSID] @="{27830B8D-F7A1-4945-AC4A-47661B9ED36D}" [HKEY_CLASSES_ROOT\CLSID\{27830B8D-F7A1-4945-AC4A-47661B9ED36D}] @="ProjectArchiver -- an addin created with VSTO technology" [HKEY_CLASSES_ROOT\CLSID\{27830B8D-F7A1-4945-AC4A-47661B9ED36D}\InprocServer32] @=hex(2):25,00,43,00,6f,00,6d,00,6d,00,6f,00,6e,00,50,00,72,00,6f,00,67,00,72,\ 00,61,00,6d,00,46,00,69,00,6c,00,65,00,73,00,25,00,5c,00,4d,00,69,00,63,00,\ 72,00,6f,00,73,00,6f,00,66,00,74,00,20,00,53,00,68,00,61,00,72,00,65,00,64,\ 00,5c,00,56,00,53,00,54,00,4f,00,5c,00,38,00,2e,00,30,00,5c,00,41,00,64,00,\ 64,00,69,00,6e,00,4c,00,6f,00,61,00,64,00,65,00,72,00,2e,00,64,00,6c,00,6c,\ 00,00,00 "ManifestName"="ProjectArchiver.dll.manifest" "ThreadingModel"="Both" "ManifestLocation"="C:\\Program Files\\Project Archiver\\" [HKEY_CLASSES_ROOT\CLSID\{27830B8D-F7A1-4945-AC4A-47661B9ED36D}\ProgID] @="ProjectArchiver" [HKEY_CLASSES_ROOT\CLSID\{27830B8D-F7A1-4945-AC4A-47661B9ED36D}\Programmable] [HKEY_CLASSES_ROOT\CLSID\{27830B8D-F7A1-4945-AC4A-47661B9ED36D}\VersionIndependentProgID] @="ProjectArchiver" [HKEY_CLASSES_ROOT\ProjectArchiver] @="" [HKEY_LOCAL_MACHINE\Software\Classes\CLSID\{27830B8D-F7A1-4945-AC4A-47661B9ED36D}] @="ProjectArchiver -- an addin created with VSTO technology" [HKEY_LOCAL_MACHINE\Software\Classes\CLSID\{27830B8D-F7A1-4945-AC4A-47661B9ED36D}\InprocServer32] @=hex(2):25,00,43,00,6f,00,6d,00,6d,00,6f,00,6e,00,50,00,72,00,6f,00,67,00,72,\ 00,61,00,6d,00,46,00,69,00,6c,00,65,00,73,00,25,00,5c,00,4d,00,69,00,

    Read the article

  • PHP cURL POST to log in to WordPress

    - by Sadi
    I followed http://stackoverflow.com/questions/724107 to log in to WordPress using php_curl, and it works fine as long as I use WAMP (Apache/PHP). But when it comes to IIS on the dedicated server, it returns nothing. I have written the following function, which works fine on my local WAMP but not when deployed to the client's dedicated Windows Server 2003 machine. Please help me. function post_url($url, array $query_string) { //$url = http://myhost.com/wptc/sys/wp/wp-login.php /* $query_string = array( 'log'=>'admin', 'pwd'=>'test', 'redirect_to'=>'http://google.com', 'wp-submit'=>'Log%20In', 'testcookie'=>1 ); */ //temp_dir is defined as folder = path/to/a/folder $cookie= temp_dir."cookie.txt"; $c = curl_init($url); if (count($query_string)) { curl_setopt ($c, CURLOPT_POSTFIELDS, http_build_query( $query_string ) ); } curl_setopt($c, CURLOPT_POST, 1); curl_setopt($c, CURLOPT_COOKIEFILE, $cookie); //curl_setopt($c, CURLOPT_SSL_VERIFYPEER, 1); //curl_setopt($c, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6"); curl_setopt($c, CURLOPT_TIMEOUT, 60); curl_setopt($c, CURLOPT_FOLLOWLOCATION, 1); curl_setopt($c, CURLOPT_RETURNTRANSFER, 1); //return the content curl_setopt($c, CURLOPT_COOKIEJAR, $cookie); //curl_setopt($c, CURLOPT_AUTOREFERER, 1); //curl_setopt($c, CURLOPT_REFERER, wp_admin_url); //curl_setopt($c, CURLOPT_MAXREDIRS, 10); curl_setopt($c, CURLOPT_HEADER, 0); //curl_setopt($c, CURLOPT_CRLF, 1); try { $result = curl_exec($c); } catch (Exception $e) { $result = 'error'; } curl_close ($c); return $result; //it returns nothing (empty) } Other facts: curl_error($c) returns nothing. When CURLOPT_HEADER is set to on, it returns these headers: HTTP/1.1 200 OK Cache-Control: no-cache, must-revalidate, max-age=0 Pragma: no-cache Content-Type: text/html; charset=UTF-8 Expires: Wed, 11 Jan 1984 05:00:00 GMT Last-Modified: Thu, 06 May 2010 21:06:30 GMT Server: Microsoft-IIS/7.0 X-Powered-By: PHP/5.2.13 Set-Cookie: wordpress_test_cookie=WP+Cookie+check; path=/wptc/sys/wp/ Set-Cookie: wordpress_b13661ceb5c3eba8b42d383be885d372=admin%7C1273352790%7C7d8ddfb6b1c0875c37c1805ab98f1e7b; path=/wptc/sys/wp/wp-content/plugins; httponly Set-Cookie: wordpress_b13661ceb5c3eba8b42d383be885d372=admin%7C1273352790%7C7d8ddfb6b1c0875c37c1805ab98f1e7b; path=/wptc/sys/wp/wp-admin; httponly Set-Cookie: wordpress_logged_in_b13661ceb5c3eba8b42d383be885d372=admin%7C1273352790%7Cb90825fb4a7d5da9b5dc4d99b4e06049; path=/wptc/sys/wp/; httponly Refresh: 0;url=http://myhost.com/wptc/sys/wp/wp-admin/ X-Powered-By: ASP.NET Date: Thu, 06 May 2010 21:06:30 GMT Content-Length: 0 cURL version info: Array ( [version_number] => 463872 [age] => 3 [features] => 2717 [ssl_version_number] => 0 [version] => 7.20.0 [host] => i386-pc-win32 [ssl_version] => OpenSSL/0.9.8k [libz_version] => 1.2.3 [protocols] => Array ( [0] => dict [1] => file [2] => ftp [3] => ftps [4] => http [5] => https [6] => imap [7] => imaps [8] => ldap [9] => pop3 [10] => pop3s [11] => rtsp [12] => smtp [13] => smtps [14] => telnet [15] => tftp ) ) PHP Version 5.2.13 Windows Server 2003 IIS 7 Working fine on Apache, PHP 3.0 on my localhost (Windows)

    Read the article

  • AT91SAM7X512's SPI peripheral gets disabled on write to SPI_TDR

    - by Dor
    My AT91SAM7X512's SPI peripheral gets disabled on the Xth write (X varies) to SPI_TDR. As a result, the processor hangs in the while loop that checks the TDRE flag in SPI_SR. This while loop is located in the function SPI_Write(), which belongs to the software package/library provided by ATMEL. The problem occurs arbitrarily - sometimes everything works OK and sometimes it fails on repeated attempts (attempt = downloading the same binary to the MCU and running the program). Configurations are (defined in the order of writing): SPI_MR: MSTR = 1 PS = 0 PCSDEC = 0 PCS = 0111 DLYBCS = 0 SPI_CSR[3]: CPOL = 0 NCPHA = 1 CSAAT = 0 BITS = 0000 SCBR = 20 DLYBS = 0 DLYBCT = 0 SPI_CR: SPIEN = 1 After setting the configurations, the code verifies that the SPI is enabled by checking the SPIENS flag. I perform a transmission of bytes as follows: const short int dataSize = 5; // Filling array with random data unsigned char data[dataSize] = {0xA5, 0x34, 0x12, 0x00, 0xFF}; short int i = 0; volatile unsigned short dummyRead; SetCS3(); // NPCS3 == PIOA15 for (i = 0; i < dataSize; i++) { mySPI_Write(data[i]); while((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TXEMPTY) == 0); dummyRead = SPI_Read(); // SPI_Read() from Atmel's library } ClearCS3(); /**********************************/ void mySPI_Write(unsigned char data) { while ((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TXEMPTY) == 0); AT91C_BASE_SPI0->SPI_TDR = data; while ((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TDRE) == 0); // <-- This is where // the processor hangs, because the SPI peripheral is disabled // (SPIENS equals 0), which makes TDRE equal to 0 forever. } Questions: What's causing the SPI peripheral to become disabled on the write to SPI_TDR? Should I un-comment the line in SPI_Write() that reads the SPI_RDR register? That is, the fourth line in the following code (the line that is originally commented out): void SPI_Write(AT91S_SPI *spi, unsigned int npcs, unsigned short data) { // Discard contents of RDR register //volatile unsigned int discard = spi->SPI_RDR; /* Send data */ while ((spi->SPI_SR & AT91C_SPI_TXEMPTY) == 0); spi->SPI_TDR = data | SPI_PCS(npcs); while ((spi->SPI_SR & AT91C_SPI_TDRE) == 0); } Is there something wrong with the code above that transmits 5 bytes of data? Please note: The NPCS line number 3 is a GPIO line (that is, in PIO mode) and is not controlled by the SPI controller. I'm controlling this line myself in the code, by de/asserting the ChipSelect#3 (NPCS3) pin when needed. The reason I'm doing so is that problems occurred while trying to let the SPI controller control this pin. I didn't reset the SPI peripheral twice, because the errata says to reset it twice only if I perform a software reset - which I don't do. Quoting the errata: If a software reset (SWRST in the SPI Control Register) is performed, the SPI may not work properly (the clock is enabled before the chip select.) Problem Fix/Workaround The SPI Control Register field, SWRST (Software Reset) needs to be written twice to be correctly set. I noticed that sometimes, if I put a delay before the write to the SPI_TDR register (in SPI_Write()), the code works perfectly and the communication succeeds. Useful links: AT91SAM7X Series Preliminary.pdf ATMEL software package/library spi.c from Atmel's library spi.h from Atmel's library An example of initializing the SPI and performing a transfer of 5 bytes would be highly appreciated and helpful.

    Read the article

  • MySQL Query GROUP_CONCAT Over Multiple Rows

    - by PeteGO
    I'm getting name and address data out of generic question / answer data to create some kind of normalised reporting database. The query I've got uses group_concat and works for individual sets of questions but not for multiple sets. I've tried to simplify what I'm doing by using just forename and surname and just 3 records, 2 for 1 person and 1 for another. In reality though there are more than 300,000 records. Example of results with qs.Id = 1. QuestionSetId Forename Surname ------------------------------------------------------- 1 Bob Jones Example of results with qs.Id IN (1, 2, 3). QuestionSetId Forename Surname ------------------------------------------------------- 3 Bob,Bob,Frank Jones,Jones,Smith What I would like to see for qs.Id IN (1, 2, 3). QuestionSetId Forename Surname ------------------------------------------------------- 1 Bob Jones 2 Bob Jones 3 Frank Smith So how can I make the 2nd example return a separate row for each set of name and address information? I realise the current way the data is stored is "questionable" but I cannot change the way the data is stored. I can get sets of individual answers but not sure how to combine the others. My simplified Schema that I cannot change: CREATE TABLE StaticQuestion ( Id INT NOT NULL, StaticText VARCHAR(500) NOT NULL); CREATE TABLE Question ( Id INT NOT NULL, Text VARCHAR(500) NOT NULL); CREATE TABLE StaticQuestionQuestionLink ( Id INT NOT NULL, StaticQuestionId INT NOT NULL, QuestionId INT NOT NULL, DateEffective DATETIME NOT NULL); CREATE TABLE Answer ( Id INT NOT NULL, Text VARCHAR(500) NOT NULL); CREATE TABLE QuestionSet ( Id INT NOT NULL, DateEffective DATETIME NOT NULL); CREATE TABLE QuestionAnswerLink ( Id INT NOT NULL, QuestionSetId INT NOT NULL, QuestionId INT NOT NULL, AnswerId INT NOT NULL, StaticQuestionId INT NOT NULL); Some example data for only forename and surname. INSERT INTO StaticQuestion (Id, StaticText) VALUES (1, 'FirstName'), (2, 'LastName'); INSERT INTO Question (Id, Text) VALUES (1, 'What is your first name?'), (2, 'What is your forename?'), (3, 'What is your Surname?'); INSERT INTO StaticQuestionQuestionLink (Id, StaticQuestionId, QuestionId, DateEffective) VALUES (1, 1, 1, '2001-01-01'), (2, 1, 2, '2008-08-08'), (3, 2, 3, '2001-01-01'); INSERT INTO Answer (Id, Text) VALUES (1, 'Bob'), (2, 'Jones'), (3, 'Bob'), (4, 'Jones'), (5, 'Frank'), (6, 'Smith'); INSERT INTO QuestionSet (Id, DateEffective) VALUES (1, '2002-03-25'), (2, '2009-05-05'), (3, '2009-08-06'); INSERT INTO QuestionAnswerLink (Id, QuestionSetId, QuestionId, AnswerId, StaticQuestionId) VALUES (1, 1, 1, 1, 1), (2, 1, 3, 2, 2), (3, 2, 2, 3, 1), (4, 2, 3, 4, 2), (5, 3, 2, 5, 1), (6, 3, 3, 6, 2); Just in case SQLFiddle is down here are the 3 queries from the examples I've linked to: 1: - working query but only on 1 set of data. 
SELECT MAX(QuestionSetId) AS QuestionSetId, GROUP_CONCAT(Forename) AS Forename, GROUP_CONCAT(Surname) AS Surname FROM (SELECT x.QuestionSetId, CASE x.StaticQuestionId WHEN 1 THEN Text END AS Forename, CASE x.StaticQuestionId WHEN 2 THEN Text END AS Surname FROM (SELECT (SELECT link.StaticQuestionId FROM StaticQuestionQuestionLink link WHERE link.Id = qa.QuestionId AND link.DateEffective <= qs.DateEffective AND link.StaticQuestionId IN (1, 2) ORDER BY link.DateEffective DESC LIMIT 1) AS StaticQuestionId, a.Text, qa.QuestionSetId FROM QuestionSet qs INNER JOIN QuestionAnswerLink qa ON qs.Id = qa.QuestionSetId INNER JOIN Answer a ON qa.AnswerId = a.Id WHERE qs.Id IN (1)) x) y 2: - working query but undesired results on multiple sets of data. SELECT MAX(QuestionSetId) AS QuestionSetId, GROUP_CONCAT(Forename) AS Forename, GROUP_CONCAT(Surname) AS Surname FROM (SELECT x.QuestionSetId, CASE x.StaticQuestionId WHEN 1 THEN Text END AS Forename, CASE x.StaticQuestionId WHEN 2 THEN Text END AS Surname FROM (SELECT (SELECT link.StaticQuestionId FROM StaticQuestionQuestionLink link WHERE link.Id = qa.QuestionId AND link.DateEffective <= qs.DateEffective AND link.StaticQuestionId IN (1, 2) ORDER BY link.DateEffective DESC LIMIT 1) AS StaticQuestionId, a.Text, qa.QuestionSetId FROM QuestionSet qs INNER JOIN QuestionAnswerLink qa ON qs.Id = qa.QuestionSetId INNER JOIN Answer a ON qa.AnswerId = a.Id WHERE qs.Id IN (1, 2, 3)) x) y 3: - working query on multiple sets of data only on 1 field (answer) though. SELECT qs.Id AS QuestionSet, a.Text AS Answer FROM QuestionSet qs INNER JOIN QuestionAnswerLink qalink ON qs.Id = qalink.QuestionSetId INNER JOIN StaticQuestionQuestionLink sqqlink ON qalink.QuestionId = sqqlink.QuestionId INNER JOIN Answer a ON qalink.AnswerId = a.Id WHERE sqqlink.StaticQuestionId = 1 /* FirstName */ AND sqqlink.DateEffective = (SELECT DateEffective FROM StaticQuestionQuestionLink WHERE StaticQuestionId = 1 AND DateEffective <= qs.DateEffective ORDER BY DateEffective DESC LIMIT 1)

    Read the article

  • Run JavaScript after form submission in an update panel?

    - by AverageJoe719
    This is driving me crazy! I have read at least 5 questions on here closely related to my problem, and probably 5 or so more pages just from Googling. I just don't get it. I am trying to have a jQuery UI dialog come up after a user fills out a form, saying 'registration submitted' and then redirecting to another page, but I cannot for the life of me get any JavaScript to work, not even a single alert. Here is my update panel: <asp:ScriptManager ID="ScriptManager1" runat="server"> </asp:ScriptManager> <asp:UpdatePanel ID="upForm" runat="server" UpdateMode="Conditional" ChildrenAsTriggers="False"> <ContentTemplate> 'Rest of form' <asp:Button ID="btnSubmit" runat="server" Text="Submit" /> <p>Did register Pass? <%= registrationComplete %></p> </ContentTemplate> </asp:UpdatePanel> The jQuery I want to execute (right now this is sitting in the head of the markup, with autoOpen set to false): <script type="text/javascript"> function pageLoad() { $('#registerComplete').dialog({ autoOpen: true, width: 270, resizable: false, modal: true, draggable: false, buttons: { "Ok": function() { window.location.href = "someUrl"; } } }); } </script> Finally my code-behind (I've commented out all the things I've tried): Protected Sub btnSubmit_Click(ByVal sender As Object, ByVal e As EventArgs) Handles btnSubmit.Click 'Dim sbScript As New StringBuilder()' registrationComplete = True registrationUpdatePanel.Update() 'sbScript.Append("<script language='JavaScript' type='text/javascript'>" + ControlChars.Lf)' 'sbScript.Append("<!--" + ControlChars.Lf)' 'sbScript.Append("window.location.reload()" + ControlChars.Lf)' 'sbScript.Append("// -->" + ControlChars.Lf)' 'sbScript.Append("</")' 'sbScript.Append("script>" + ControlChars.Lf)' 'ScriptManager.RegisterClientScriptBlock(Me.Page, Me.GetType(), "AutoPostBack", sbScript.ToString(), False)' 'ClientScript.RegisterStartupScript("AutoPostBackScript", sbScript.ToString())' 'Response.Write("<script type='text/javascript'>alert('Test')</script>")' 'Response.Write("<script>windows.location.reload()</script>")' End Sub I've tried: Passing variables from server to client via inline <%= %> in the JavaScript block of the head tag. Putting that same code in a script tag inside the UpdatePanel. Trying to use RegisterClientScriptBlock and RegisterStartupScript. Just doing a Response.Write with the script tag written in it. Trying various combinations of putting the entire jQuery .dialog() code in the registered startup script, or just trying to change the autoOpen property, or just calling "open" on it. I can't even get a simple alert to work with any of these, so I am doing something wrong but I just don't know what it is. Here is what I know: The jQuery is binding properly even on async postbacks, because the div container that is the dialog box is always invisible; I saw a similar post on here stating that that was causing an issue, but that isn't the case here. I'm using pageLoad instead of document.ready since that is supposed to run on both async and normal postbacks, so that isn't the issue. The update panel is updating correctly because <p>Did register Pass? <%= registrationComplete %></p> updates to true after I submit the form. So how can I make this work? All I want is: click the submit button inside an update panel, run server-side code to validate the form and insert into the DB, and if everything succeeded, have that jQuery (modal) dialog pop up saying it worked.
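
    One commonly suggested shape for this situation, shown as a hedged C# sketch (the question's code-behind is VB.NET, but the relevant call is the same): during a partial UpdatePanel postback, scripts registered through ClientScript are ignored and Response.Write can break the partial-rendering response, while ScriptManager.RegisterStartupScript against a control inside the panel is emitted and executed after the panel re-renders. Whether this resolves the exact case above is not certain, but it matches the "not even a single alert" symptom.

    using System;
    using System.Web.UI;

    public partial class RegisterPage : Page
    {
        // upForm and btnSubmit are assumed to be the controls declared in the
        // markup shown above (normally wired up via the designer file).
        protected void btnSubmit_Click(object sender, EventArgs e)
        {
            // ... validate the form and insert into the database here ...

            // Illustrative dialog call; assumes the #registerComplete div sits
            // outside the UpdatePanel or is re-created by the partial render.
            const string script = "$('#registerComplete').dialog('open');";

            ScriptManager.RegisterStartupScript(
                upForm,                // register against the UpdatePanel, not the Page
                upForm.GetType(),
                "showRegisterDialog",  // unique key for this registration
                script,
                true);                 // true = wrap the snippet in <script> tags
        }
    }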

    Read the article

  • ASP.NET MVC 2 Model encapsulated within ViewModel Validation

    - by Program.X
    I am trying to get validation to work in ASP.NET MVC 2, but without much success. I have a complex class containing a large number of fields. (Don't ask - this is one of those real-world situations best practices can't touch.) This would normally be my Model and is a LINQ-to-SQL generated class. Because this is generated code, I have created a metadata class as per http://davidhayden.com/blog/dave/archive/2009/08/10/AspNetMvc20BuddyClassesMetadataType.aspx. public class ConsultantRegistrationMetadata { [DisplayName("Title")] [Required(ErrorMessage = "Title is required")] [StringLength(10, ErrorMessage = "Title cannot contain more than 10 characters")] string Title { get; set; } [Required(ErrorMessage = "Forename(s) is required")] [StringLength(128, ErrorMessage = "Forename(s) cannot contain more than 128 characters")] [DisplayName("Forename(s)")] string Forenames { get; set; } // ... I've attached this to the partial class of my generated class: [MetadataType(typeof(ConsultantRegistrationMetadata))] public partial class ConsultantRegistration { // ... Because my form is complex, it has a number of dependencies, such as SelectLists, etc., which I have encapsulated in a ViewModel pattern - and included the ConsultantRegistration model as a property: public class ConsultantRegistrationFormViewModel { public Data.ConsultantRegistration ConsultantRegistration { get; private set; } public SelectList Titles { get; private set; } public SelectList Countries { get; private set; } // ... So it is essentially ViewModel=Model. My View then has: <p> <%: Html.LabelFor(model => model.ConsultantRegistration.Title) %> <%: Html.DropDownListFor(model => model.ConsultantRegistration.Title, Model.Titles,"(select a Title)") %> <%: Html.ValidationMessage("Title","*") %> </p> <p> <%: Html.LabelFor(model => model.ConsultantRegistration.Forenames) %> <%: Html.TextBoxFor(model => model.ConsultantRegistration.Forenames) %> <%: Html.ValidationMessageFor(model=>model.ConsultantRegistration.Forenames) %> </p> The problem is, the validation attributes on the metadata class are having no effect. I tried doing it via an interface, but that also had no effect. I'm beginning to think that the reason is that I am encapsulating my model within a ViewModel. My Controller (Create action) is as follows: [HttpPost] public ActionResult Create(Data.ConsultantRegistration consultantRegistration) { if (ModelState.IsValid) // this is always true - which is wrong!! { try { consultantRegistration = ConsultantRegistrationRepository.SaveConsultantRegistration(consultantRegistration); return RedirectToAction("Edit", new { id = consultantRegistration.ID, sectionIndex = 2 }); } catch (Exception ex) { ModelState.AddModelError("CreateException",ex); } } return View(new ConsultantRegistrationFormViewModel(consultantRegistration)); } As outlined in the comment, the ModelState.IsValid property always returns true, despite fields with the validation annotations not being valid (Forenames being a key example). Am I missing something obvious - considering I am an MVC newbie? I'm after the mechanism demoed by Jon Galloway at http://www.asp.net/learn/mvc-videos/video-10082.aspx. (I am aware it is similar to http://stackoverflow.com/questions/1260562/asp-net-mvc-model-viewmodel-validation but that post seems to talk about xVal. I have no idea what that is and suspect it is for MVC 1.)
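
    Two things worth checking, offered as assumptions about how MVC 2 buddy classes are usually wired up rather than a confirmed diagnosis of the code above: the [MetadataType] partial class has to live in the same namespace as the generated class (otherwise the partials never merge and the attribute is never seen), and buddy-class members are conventionally declared public so the associated-metadata provider can pair them with the real properties. A sketch of that layout:

    using System.ComponentModel;
    using System.ComponentModel.DataAnnotations;

    namespace Data
    {
        // Same namespace as the LINQ-to-SQL generated ConsultantRegistration,
        // or this is a different class entirely and the metadata is ignored.
        [MetadataType(typeof(ConsultantRegistrationMetadata))]
        public partial class ConsultantRegistration { }

        public class ConsultantRegistrationMetadata
        {
            // Public members, matched to the real properties by name.
            [DisplayName("Title")]
            [Required(ErrorMessage = "Title is required")]
            [StringLength(10, ErrorMessage = "Title cannot contain more than 10 characters")]
            public string Title { get; set; }

            [DisplayName("Forename(s)")]
            [Required(ErrorMessage = "Forename(s) is required")]
            [StringLength(128, ErrorMessage = "Forename(s) cannot contain more than 128 characters")]
            public string Forenames { get; set; }
        }
    }

    Since the inputs are rendered with names like ConsultantRegistration.Forenames, it may also help to bind with that prefix explicitly, e.g. Create([Bind(Prefix = "ConsultantRegistration")] Data.ConsultantRegistration consultantRegistration), so the posted values and their validation results land on the parameter whose ModelState is being checked.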

    Read the article

  • Simplest way to do holiday calculation?

    - by brainfrog
    I want to make a little free calendar program to help me and others calculate how much time we have left in a project. I mean real working time, not just time. Time in its raw form doesn't say much. Typically, when my boss tells me that I have time until 05-05-2011, it doesn't really tell me how much time I have to do my job. You know... so many things stop me from working: A) being at home, not at work (so-called "free time" or "spare time"). That is, in my case I work exactly 8 hours a day and then the cleaning ladies throw me out of the office with their incredibly loud industrial vacuum cleaners every evening (my boss accepts that as an excuse to go home on time, regularly). B) weekends, or more precisely Saturdays and Sundays. C) official holidays rescuing me from having to go to work. What I want to do is make a little utility which tells me how many working hours I really have in a given time period. The first two things, A and B, are pretty easy to implement. But the last thing, C, scares my pants off. Holidays. OOOHHH man. You know what that means. Chaos. Pure chaos. The huge question is: HOW TO CALCULATE HOLIDAYS?! Since I want my program to be useful for anyone anywhere in the world, I can't just hardcode all holidays for my little town. So which options do I have? I) I could hand-craft downloadable lists of holidays. Users search for them within the application and download them from a web server. Or I ship all of them in the package. But I would get very, very old if I tried to do that by myself for every country, state and town. II) I make an initial data sheet with holidays for my town, and don't care about the rest. However, I make that sheet public along with a how-to, so that everyone who feels like being very nice can provide holiday data for his country / region / whatever. Those are made public on a web server and everyone can get the data packages he/she needs for the app. III) ? I care a lot about usability. I don't want to make an ugly, Linux-hack-style, hard-to-use app that only computer freaks can use. So you need to tell me more about holiday science. I was never really clever at this. I assume every single country in the world has its own set of holidays. In every country there may be several states. For example, the US has some, and Germany also has states. Holidays vary from state to state. But a good programmer once told me never to assume anything. So the questions about holiday science are: Which categories do I need to make holiday data packs searchable? A guy from India should quickly find his holiday data pack, and a guy from Silicon Valley should find his pack equally fast. It makes the most sense to me to filter for COUNTRY STATE WHATEVER. Like a drill-down search. Did I miss something? What would be the best data format to hold holiday information? A holiday has a start and end date and a name. That should be enough. Would I put all this stuff in thousands of XML files? How would you go about this? Any hint / help is highly welcome! Thanks to everyone!
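
    A minimal sketch of the working-hours side, assuming the holiday packs have already been chosen (by country/state) and parsed into a flat list. The Holiday shape and the 8-hour day come from the description above; everything else is illustrative:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // One entry of a downloaded holiday pack: a named, inclusive date range.
    public class Holiday
    {
        public string Name { get; set; }
        public DateTime Start { get; set; }
        public DateTime End { get; set; }
    }

    public static class WorkingTime
    {
        // Counts working days between two dates (inclusive), skipping
        // Saturdays, Sundays and any day covered by a holiday entry.
        public static int WorkingDays(DateTime fromDate, DateTime toDate, IEnumerable<Holiday> holidays)
        {
            var list = holidays.ToList();
            int days = 0;
            for (DateTime d = fromDate.Date; d <= toDate.Date; d = d.AddDays(1))
            {
                if (d.DayOfWeek == DayOfWeek.Saturday || d.DayOfWeek == DayOfWeek.Sunday)
                    continue;
                if (list.Any(h => d >= h.Start.Date && d <= h.End.Date))
                    continue;
                days++;
            }
            return days;
        }

        // Point A above: exactly 8 working hours per office day.
        public static int WorkingHours(DateTime fromDate, DateTime toDate, IEnumerable<Holiday> holidays)
        {
            return WorkingDays(fromDate, toDate, holidays) * 8;
        }
    }

    Each pack could then be a single XML or CSV file per country/state (name, start date, end date per row) rather than thousands of files, and the country/state drill-down only has to decide which file to download.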

    Read the article
