Search Results

Search found 7294 results on 292 pages for 'out parameters'.

Page 233 of 292

  • Getting identity from Ado.Net Update command

    - by rboarman
    My scenario is simple. I am trying to persist a DataSet and have the identity column filled in so I can add child records. Here's what I've got so far: using (SqlConnection connection = new SqlConnection(connStr)) { SqlDataAdapter adapter = new SqlDataAdapter("select * from assets where 0 = 1", connection); adapter.MissingMappingAction = MissingMappingAction.Passthrough; adapter.MissingSchemaAction = MissingSchemaAction.AddWithKey; SqlCommandBuilder cb = new SqlCommandBuilder(adapter); var insertCmd = cb.GetInsertCommand(true); insertCmd.Connection = connection; connection.Open(); adapter.InsertCommand = insertCmd; adapter.InsertCommand.CommandText += "; set ? = SCOPE_IDENTITY()"; adapter.InsertCommand.UpdatedRowSource = UpdateRowSource.OutputParameters; var param = new SqlParameter("RowId", SqlDbType.Int); param.SourceColumn = "RowId"; param.Direction = ParameterDirection.Output; adapter.InsertCommand.Parameters.Add(param); SqlTransaction transaction = connection.BeginTransaction(); insertCmd.Transaction = transaction; try { assetsImported = adapter.Update(dataSet.Tables["Assets"]); transaction.Commit(); } catch (Exception ex) { transaction.Rollback(); // Log an error } connection.Close(); } The first thing that I noticed, besides the fact that the identity value is not making its way back into the DataSet, is that my change to add the scope_identity select statement to the insert command is not being executed. Looking at the query using Profiler, I do not see my addition to the insert command. Questions: 1) Why is my addition to the insert command not making its way to the sql being executed on the database? 2) Is there a simpler way to have my DataSet refreshed with the identity values of the inserted rows? 3) Should I use the OnRowUpdated callback to add my child records? My plan was to loop through the rows after the Update() call and add children as needed. Thank you in advance. Rick
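
    A sketch of one common workaround (an assumption, not taken from the article): instead of appending to the command generated by SqlCommandBuilder, which the builder owns and may regenerate at Update time, hand-build the InsertCommand with an output parameter so the adapter writes the identity back into each row. The column names below are illustrative, and the snippet is meant to drop into the same using block in place of the command-builder code.

      var insertCmd = new SqlCommand(
          "INSERT INTO assets (Name, Cost) VALUES (@Name, @Cost); " +
          "SET @RowId = SCOPE_IDENTITY();", connection);
      insertCmd.Parameters.Add("@Name", SqlDbType.NVarChar, 100, "Name");   // hypothetical columns
      insertCmd.Parameters.Add("@Cost", SqlDbType.Money, 0, "Cost");
      SqlParameter idParam = insertCmd.Parameters.Add("@RowId", SqlDbType.Int, 0, "RowId");
      idParam.Direction = ParameterDirection.Output;
      insertCmd.UpdatedRowSource = UpdateRowSource.OutputParameters;
      adapter.InsertCommand = insertCmd;
      // After adapter.Update(...), each inserted DataRow's RowId column holds the new
      // identity, so child rows can be added in a loop or in an OnRowUpdated handler.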

    Read the article

  • "Exclusive" DirectDraw palette isn't actually exclusive

    - by CyberShadow
    We're maintaining an old video game that uses a full-screen 256-color graphics mode with DirectDraw. The problem is, some applications running in the background sometimes try to change the system palette while the game is running, which results in corrupted graphics. We can (sometimes) detect when this happens by processing the WM_PALETTECHANGED message. A few update versions ago we added logging (just log the window title/class/process name), which helped users identify offending applications and close them. MSN Live Messenger was a common culprit. The problem got worse when we found out that Windows Vista (and 7) does it "by itself". The WM_PALETTECHANGED parameters point towards CSRSS and the desktop window. In Vista, a workaround that often worked was to open any folder (Computer, Documents, etc.) and leave it open while running the game. Sounds ridiculous, but it worked - in most cases. In Windows 7, not even this workaround worked any more. Users found that stopping some services (Windows Update and the indexing service) also resolved the problem on some configurations. Some time ago I just started trying random things in hope of finding a solution. I found that setting the GDI palette (using Create/SelectPalette) before setting the DirectDraw palette (using IDirectDrawPalette::SetEntries) would restore the palette after it became corrupted (WM_PALETTECHANGED handler). SetSystemPaletteUse and calling SetPalette on the primary surface helped some more. However, there is still perceivable flickering when an application tries to steal the palette, which is especially prominent during fades. Question: is there a way to get a "real" exclusive palette, which completely disallows other applications changing the Windows palette as long as our game retains focus?

    Read the article

  • Misalignment in the output Bitmap created from a byte array

    - by Daniel
    I am trying to understand why I am having trouble creating a Bitmap from a byte array. I post this after careful scrutiny of the existing posts about Bitmap creation from byte arrays, such as: Creating a bitmap from a byte[], Working with Image and Bitmap in c#?, C#: Bitmap Creation using bytes array. My code executes a filter on an 8bppIndexed digital image, writing the pixel values to a byte[] buffer which is then converted back (after some processing to manage gray levels) into an 8bppIndexed Bitmap. My input image is a trivial image created by means of specific Perl code: https://www.box.com/shared/zqt46c4pcvmxhc92i7ct Of course, after executing the filter the output image has lost the first and last rows and the first and last columns, due to the way the filter manages borders, so from the original 256 x 256 image I get a 254 x 254 image. Just to stay focused on the issue I have commented out the code responsible for executing the filter, so that the operation really performed is the obvious: ComputedPixel = InputImage.GetPixel(myColumn, myRow).R; I know I should use lock and unlock, but I prefer one headache at a time. Anyway, this code should amount to an identity transform, and at the end I use: private unsafe void FillOutputImage() { OutputImage = new Bitmap(OutputImageCols, OutputImageRows, PixelFormat.Format8bppIndexed); ColorPalette ncp = OutputImage.Palette; for (int i = 0; i < 256; i++) ncp.Entries[i] = Color.FromArgb(255, i, i, i); OutputImage.Palette = ncp; Rectangle area = new Rectangle(0, 0, OutputImageCols, OutputImageRows); var data = OutputImage.LockBits(area, ImageLockMode.WriteOnly, OutputImage.PixelFormat); Marshal.Copy(byteBuffer, 0, data.Scan0, byteBuffer.Length); OutputImage.UnlockBits(data); } The output image I get is the following: https://www.box.com/shared/p6tubyi6dsf7cyregg9e It is quite clear that I am losing a pixel per row, but I cannot understand why: I have carefully checked all the parameters (OutputImageCols, OutputImageRows, and the length and content of the byte[] byteBuffer), even writing known values as a way to test. The code is nearly identical to other code posted on Stack Overflow and elsewhere. Could someone help me identify where the problem is? Thanks a lot
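
    A likely cause (an assumption, not confirmed in the article): LockBits rows are padded to a 4-byte boundary, so a 254-pixel-wide 8bpp image has a Stride of 256 bytes while byteBuffer is packed at 254 bytes per row, and a single Marshal.Copy therefore shifts every row by the padding. A minimal sketch of copying row by row inside FillOutputImage:

      var data = OutputImage.LockBits(area, ImageLockMode.WriteOnly, OutputImage.PixelFormat);
      for (int y = 0; y < OutputImageRows; y++)
      {
          // Destination row starts at Scan0 + y * Stride; the source row is tightly packed.
          IntPtr destRow = new IntPtr(data.Scan0.ToInt64() + (long)y * data.Stride);
          Marshal.Copy(byteBuffer, y * OutputImageCols, destRow, OutputImageCols);
      }
      OutputImage.UnlockBits(data);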

    Read the article

  • Optimizing C# code in MVC controller

    - by cc0
    I am making a number of distinct controllers, one for each stored procedure in a database. These are only used to read data and make it available in JSON format for JavaScript. My code so far looks like this, and I'm wondering if I have missed any opportunities to re-use code, maybe by making some helper classes. I have way too little experience doing OOP, so any help and suggestions here would be really appreciated. Here is my generalized code so far (tested and working): using System; using System.Configuration; using System.Web.Mvc; using System.Data; using System.Text; using System.Data.SqlClient; using Prototype.Models; namespace Prototype.Controllers { public class NameOfStoredProcedureController : Controller { char[] lastComma = { ',' }; String oldChar = "\""; String newChar = "&quot;"; StringBuilder json = new StringBuilder(); private String strCon = ConfigurationManager.ConnectionStrings["SomeConnectionString"].ConnectionString; private SqlConnection con; public StoredProcedureController() { con = new SqlConnection(strCon); } public string do_NameOfStoredProcedure(int parameter) { con.Open(); using (SqlCommand cmd = new SqlCommand("NameOfStoredProcedure", con)) { cmd.CommandType = CommandType.StoredProcedure; cmd.Parameters.AddWithValue("@parameter", parameter); using (SqlDataReader reader = cmd.ExecuteReader()) { while (reader.Read()) { json.AppendFormat("[{0},\"{1}\"],", reader["column1"], reader["column2"]); } } con.Close(); } if (json.Length.ToString().Equals("0")) { return "[]"; } else { return "[" + json.ToString().TrimEnd(lastComma) + "]"; } } //http://host.com/NameOfStoredProcedure?parameter=value public ActionResult Index(int parameter) { return new ContentResult { ContentType = "application/json", Content = do_NameOfStoredProcedure(parameter) }; } } }
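
    One way to factor out the shared plumbing (a sketch with illustrative names, not a drop-in replacement): put the connection handling and JSON building in a base controller, and let each stored-procedure controller supply only its procedure name and parameters. Creating the connection per request also avoids keeping an open SqlConnection as controller state.

      public abstract class StoredProcedureControllerBase : Controller
      {
          private static readonly string ConnStr =
              ConfigurationManager.ConnectionStrings["SomeConnectionString"].ConnectionString;

          // Runs the procedure and returns the rows as a JSON array of [col1, "col2"] pairs.
          protected ContentResult JsonFromProcedure(string procedureName, params SqlParameter[] parameters)
          {
              var json = new StringBuilder("[");
              using (var con = new SqlConnection(ConnStr))
              using (var cmd = new SqlCommand(procedureName, con) { CommandType = CommandType.StoredProcedure })
              {
                  cmd.Parameters.AddRange(parameters);
                  con.Open();
                  using (SqlDataReader reader = cmd.ExecuteReader())
                  {
                      while (reader.Read())
                          json.AppendFormat("[{0},\"{1}\"],", reader[0], reader[1]);
                  }
              }
              return new ContentResult
              {
                  ContentType = "application/json",
                  Content = json.ToString().TrimEnd(',') + "]"
              };
          }
      }

      public class NameOfStoredProcedureController : StoredProcedureControllerBase
      {
          public ActionResult Index(int parameter)
          {
              return JsonFromProcedure("NameOfStoredProcedure", new SqlParameter("@parameter", parameter));
          }
      }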

    Read the article

  • curl: downloading from dynamic url

    - by adam n
    I'm trying to download an HTML file with curl in bash, from a site like this: http://www.registrar.ucla.edu/schedule/detselect.aspx?termsel=10S&subareasel=PHYSICS&idxcrs=0001B+++ When I download it manually, it works fine. However, when I try to run my script through crontab, the output HTML file is very small and just says "Object moved to here." with a broken link. Does this have something to do with the sparse environment the crontab commands run in? I found this question: http://stackoverflow.com/questions/1279340/php-ssl-curl-object-moved-error but I'm using bash, not PHP. What are the equivalent command line options or variables to set to fix this problem in bash? (I want to do this with curl, not wget.) Edit: well, sometimes downloading the file manually (via an interactive shell) works, but sometimes it doesn't (I still get the "Object moved here" message). So it may not specifically be a problem with cron's environment, but with curl itself. The cron entry: * * * * * ~/.class/test.sh >> ~/.class/test_out 2>&1 test.sh: #! /bin/bash PATH=/usr/local/bin:/usr/bin:/bin:/sbin cd ~/.class course="physics 1b" url="http://www.registrar.ucla.edu/schedule/detselect.aspx?termsel=10S&subareasel=PHYSICS&idxcrs=0001B+++" curl "$url" -sLo "$course".html --max-redirs 5 As I was searching around on Google, someone suggested that the problem might happen because there are parameters in the URL. (Because it is a dynamic URL?)

    Read the article

  • Cannot edit values of DataGridView bound to a BindingList

    - by m_oLogin
    Hello community, I am having trouble editing a data-bound BindingList. Let me illustrate with the following. Say I have the Person class: public Class Person{ private string m_firstname; private string m_lastname; public string FirstName{get;set;} public string LastName{get;set;} public Person{ ... } } I then have a containing class called Population: public class Population{ private BindingList<Person> m_lstPerson = new BindingList<Person>(); private string m_countryName; public BindingList<Person> ListPerson{get; set;} public string CountryName { get; set; } } On one form I have a first DataGridView with DataSource = m_lstPopulation (a BindingList). The binding works like a charm when working with the Population objects. When I double-click, it opens a dialog form showing the object details. One tab in the details holds a DataGridView bound to that population's ListPerson. The second DataGridView displays fine. However, I cannot edit or add cells in this DataGridView. None of the columns is set to read-only. In fact, both DataGridViews have just about the same parameters. What am I missing? It seems as if a lock has been placed on the Population object so that its inner fields cannot be edited... Please advise. Thanks.
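
    Two things worth checking (assumptions, not a confirmed diagnosis): BindingList<T> only supports adding rows from the grid when T has a public parameterless constructor (or an AddingNew handler), and editing requires public setters on the bound properties; routing the list through a BindingSource also tends to keep the grid's editing behaviour predictable. A minimal sketch, where population and dataGridViewPeople are hypothetical names:

      public class Person
      {
          public Person() { }                    // needed for the grid's "new row" support
          public string FirstName { get; set; }  // public setters allow cell editing
          public string LastName { get; set; }
      }

      // On the details form:
      private void BindPeopleGrid()
      {
          var source = new BindingSource { DataSource = population.ListPerson };
          dataGridViewPeople.DataSource = source;
          dataGridViewPeople.AllowUserToAddRows = true;
      }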

    Read the article

  • VS2010 : javascript intellisense : specifying properties for 'options' objects passed to methods

    - by Master Morality
    Since javascript intellisense actually seems to work in VS2010, I thought I might add some to those scripts I include in almost everything. The trouble is, on some complex functions, I use option objects instead of passing umpteen different parameters, like so: function myFunc(options){ var myVar1 = options.myVar1, myVar2 = options.myVar2, myVar3 = options.myVar3; ... } the trouble I am running into is, there doesn't seem to be a way to specify what properties options needs to have. I've tried this: function myFunc(options){ ///<summary>my func does stuff...</summary> ///<param name="options"> ///myVar1 : the first var ///myVar2 : the second var ///myVar3 : the third var ///</param> var myVar1 = options.myVar1, myVar2 = options.myVar2, myVar3 = options.myVar3; ... } but the line breaks are removed and all the property comments run together, making them stupidly hard to read. I've tried the <para> tags, but to no avail. If anyone has any ideas on how I might achieve this, please let me know. -Brandon

    Read the article

  • 2 different routes on one page?

    - by Dejan.S
    Hi, I'm pretty new to MVC2, or MVC in general. If there's one thing I get caught up on, it's routes. Right now I have this scenario: I'm going from the regular site to Admin. My navigation is the same partial view on both; I just check which data to render, something like this: <% if (!Request.RawUrl.Contains("Admin")){%> <% foreach (var site in Model) { %> <%= Html.MenuItem(site.BelongSite, "Sida", "Site", site.BelongSite) %> | <%} %> <%} else {%> <%= Html.ActionLink("Konfig", "Konfigurera", "Admin") %> <% } %> My route looks like this: routes.MapRoute( "Admin", // Route name "Admin/{action}/{name}", // URL with parameters new { controller = "Admin", action = "konfigurera", name = UrlParameter.Optional } // Parameter defaults ); On my view called Konfigurera I have Edit sites, and they use the route above and it works great. The navigation, though, doesn't get an action assigned to it; it's just <a href='Admin/'>. The navigation is in the shared folder, and it is strongly typed. Any ideas? I've been struggling with this for about an hour now. Thanks for any input
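
    For what it's worth, the collapsed href can be expected behaviour rather than a missing action (this is an observation about ASP.NET routing, not taken from the article): when a generated URL's segment values equal the route defaults, routing omits them, so Html.ActionLink("Konfig", "Konfigurera", "Admin") renders as /Admin and still dispatches to the AdminController's konfigurera action. A sketch of a route table where this works alongside the normal site route (the route names and the default route are assumptions):

      public static void RegisterRoutes(RouteCollection routes)
      {
          routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

          // More specific route first.
          routes.MapRoute(
              "Admin",
              "Admin/{action}/{name}",
              new { controller = "Admin", action = "konfigurera", name = UrlParameter.Optional });

          routes.MapRoute(
              "Default",
              "{controller}/{action}/{id}",
              new { controller = "Site", action = "Index", id = UrlParameter.Optional });
      }

    Passing an explicit name route value (for example new { name = "x" }) forces the action segment to appear in the generated URL.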

    Read the article

  • How to create a JAX-RS service where the sub-resource @Path doesn't have a leading slash

    - by Matt
    I've created a JAX-RS service (MyService) that has a number of sub resources, each of which is a subclass of MySubResource. The sub resource class being chosen is picked based on the parameters given in the MyService path, for example: @Path("/") @Provides({"text/html", "text/xml"}) public class MyResource { @Path("people/{id}") public MySubResource getPeople(@PathParam("id") String id) { return new MyPeopleSubResource(id); } @Path("places/{id}") public MySubResource getPlaces(@PathParam("id") String id) { return new MyPlacesSubResource(id); } } where MyPlacesSubResource and MyPeopleSubResource are both sub-classes of MySubResource. MySubResource is defined as: public abstract class MySubResource { protected abstract Results getResults(); @GET public Results get() { return getResults(); } @GET @Path("xml") public Response getXml() { return Response.ok(getResults(), MediaType.TEXT_XML_TYPE).build(); } @GET @Path("html") public Response getHtml() { return Response.ok(getResults(), MediaType.TEXT_HTML_TYPE).build(); } } Results is processed by corresponding MessageBodyWriters depending on the mimetype of the response. While this works it results in paths like /people/Bob/html or /people/Bob/xml where what I really want is /people/Bob.html or /people/Bob.xml Does anybody know how to accomplish what I want to do?

    Read the article

  • So what RDF database do I use for a product-attribute situation? Initially I thought about using EAV

    - by keisimone
    Hi, I have a similar issue to the one described in http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters. I am now convinced to use RDF, mainly because of one of the comments made by Bill Karwin in the answer to that question, but I already have a database in MySQL and the code is in PHP. 1) So what RDF database should I use? 2) Do I combine the approaches, meaning I keep class table inheritance in the MySQL database and put just the awkward product attributes in RDF? I don't think I should move everything to an RDF database, since it is only the products, with their wide array of possible attributes and values, that are giving me the problem. 3) What PHP resources and articles should I look at that will help me with the creation of this? 4) More articles or resources that help me better understand RDF in the context of the above challenge of building something that will better hold all sorts of products' attributes and values would be greatly appreciated. I tend to work better when I have a conceptual understanding of what is going on. Do bear in mind that I am a complete novice at this and my knowledge of programming and databases is average at best.

    Read the article

  • Conditional macro expansion

    - by Dave DeLong
    Heads up: This is a weird question. I've got some really useful macros that I like to use to simplify some logging. For example I can do Log(@"My message with arguments: %@, %@, %@", @"arg1", @"arg2", @"arg3"), and that will get expanded into a more complex method invocation that includes things like self, _cmd, __FILE__, __LINE__, etc, so that I can easily track where things are getting logged. This works great. Now I'd like to expand my macros to not only work with Objective-C methods, but general C functions. The problem is the self and _cmd portions that are in the macro expansion. These two parameters don't exist in C functions. Ideally, I'd like to be able to use this same set of macros within C functions, but I'm running into problems. When I use (for example) my Log() macro, I get compiler warnings about self and _cmd being undeclared (which makes total sense). My first thought was to do something like the following (in my macro): if (thisFunctionIsACFunction) { DoLogging(nil, nil, format, ##__VA_ARGS__); } else { DoLogging(self, _cmd, format, ##__VA_ARGS__); } This still produces compiler warnings, since the entire if() statement is substituted in place of the macro, resulting in errors with the self and _cmd keywords (even though they will never be executed during function execution). My next thought was to do something like this (in my macro): if (thisFunctionIsACFunction) { #define SELF nil #define CMD nil } else { #define SELF self #define CMD _cmd } DoLogging(SELF, CMD, format, ##__VA_ARGS__); That doesn't work, unfortunately. I get "error: '#' is not followed by a macro parameter" on my first #define. My other thought was to create a second set of macros, specifically for use in C functions. This reeks of a bad code smell, and I really don't want to do this. Is there some way I can use the same set of macros from within both Objective-C methods and C functions, and only reference self and _cmd if the macro is in an Objective-C method?

    Read the article

  • Multiple threads modifying a collection in Java?

    - by posdef
    Hi, the project I am working on requires a whole bunch of queries against a database. In principle there are two types of queries I am using: (I) read from an Excel file, check a couple of parameters, and query for hits in the database. These hits are then registered as a series of custom classes. Any hit may (and most likely will) occur more than once, so this part of the code checks and updates the occurrence in a custom list implementation that extends ArrayList. (II) For each hit found, do a detail query and parse the output, so that the classes created in (I) get detailed info. I figured I would use multiple threads to optimize time-wise. However, I can't really come up with a good way to solve the problem that occurs with the collection these items are stored in. To elaborate a little bit: throughout the execution, objects are supposed to be modified by both (I) and (II). I deliberately didn't copy/paste any code, as it would take big chunks of code to make any sense. I hope the description above makes sense. Thanks,

    Read the article

  • C# How to perform a live xslt transformation on an in memory object?

    - by JL
    I have a function that takes 2 parameters: 1 = XML file, 2 = XSLT file. It performs a transformation and returns the resulting HTML. Here is the function: /// <summary> /// Will apply an XSLT style to any XML file and return the rendered HTML. /// </summary> /// <param name="xmlFileName"> /// The file name of the XML document. /// </param> /// <param name="xslFileName"> /// The file name of the XSL document. /// </param> /// <returns> /// The rendered HTML. /// </returns> public string TransformXml(string xmlFileName, string xslFileName) { var xtr = new XmlTextReader(xmlFileName) { WhitespaceHandling = WhitespaceHandling.None }; var xd = new XmlDocument(); xd.Load(xtr); var xslt = new System.Xml.Xsl.XslCompiledTransform(); xslt.Load(xslFileName); var stm = new MemoryStream(); xslt.Transform(xd, null, stm); stm.Position = 1; var sr = new StreamReader(stm); xtr.Close(); return sr.ReadToEnd(); } I want to change the function so that it does not accept a file for the XML, but instead just an object. The object would be exactly compatible with the XSLT if it were serialized to a file, but I don't want to have to serialize it to a file first. So to recap: keep the XSLT coming from a file, but the XML input should be an object I pass, and I would like to generate the XML from it without any file system interaction.
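
    A sketch of an overload that skips the file system for the XML side (assuming the object serializes with XmlSerializer into the shape the stylesheet expects; uses System.IO, System.Xml.Serialization, System.Xml.XPath). As an aside, stm.Position = 1 in the original skips the first byte of the stream; 0 looks like the intended value.

      public string TransformObject(object source, string xslFileName)
      {
          var xslt = new System.Xml.Xsl.XslCompiledTransform();
          xslt.Load(xslFileName);

          using (var xmlStream = new MemoryStream())
          {
              // Serialize the object straight into memory...
              new XmlSerializer(source.GetType()).Serialize(xmlStream, source);
              xmlStream.Position = 0;

              // ...and feed it to the transform without touching disk.
              var input = new XPathDocument(xmlStream);
              using (var writer = new StringWriter())
              {
                  xslt.Transform(input, null, writer);
                  return writer.ToString();
              }
          }
      }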

    Read the article

  • Saving HttpResponse/Request to file system

    - by chrisjlong
    Here is my scenario. The user fills out a large page which is dynamically created based on DB values. Those values can change. When the user fills out the page and hits submit, we want to save a copy of the page as HTML on the server; this way, if the text or wording changes, when they go back to view their posted information it is historically accurate. So I basically need to do this: protected void buttonSave_Click(object sender, EventArgs e) { //collect information into an object to save it in the db bool result = BusinessLogic.Save(myBusinessObject); if (result) //!!! Here is where I need to save this page as an html file on my server's IFS!!!! else //whatever Response.Redirect("~/SomeOtherPage.aspx"); } Any help is greatly appreciated. Also, I CANNOT just request the data from the URL, because query string parameters are a big no-no in this case. The key to pull the database info up (at its highest level) is all in session, so I can't just request a URL and save it. Thanks!
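
    One possible approach (an assumption, not taken from the article): render a viewing page server-side with Server.Execute, which runs inside the same request so Session is still available, then write the captured HTML to disk before redirecting. The page name and archive folder below are hypothetical, and File/StringWriter need using System.IO.

      protected void buttonSave_Click(object sender, EventArgs e)
      {
          bool result = BusinessLogic.Save(myBusinessObject);
          if (result)
          {
              using (var writer = new StringWriter())
              {
                  // "ViewSubmission.aspx" is a hypothetical page that renders the
                  // posted information from Session.
                  Server.Execute("ViewSubmission.aspx", writer, false);
                  string path = Server.MapPath("~/Archive/" + Guid.NewGuid() + ".html");
                  File.WriteAllText(path, writer.ToString());
              }
          }
          Response.Redirect("~/SomeOtherPage.aspx");
      }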

    Read the article

  • Use Hudson Build Parameter in Grails Build Target

    - by Stephen Swensen
    I have created two Hudson String Parameters in my parametrized build configuration: svnRoot, and svnBranch. I can reference these just fine when specifying my Repository URL: ${svnRoot}/${svnBranch}/subProject. But I have not been able to reference them as part of my Grails Build Target: "build-applet ${svnRoot}/${svnBranch}/appletProject username password" "war --non-interactive". build-applet invokes a Gant script in the Grails project at scripts\BuildApplet.groovy. This attempt yields the following error: groovy.lang.MissingPropertyException: No such property: svnRoot for class: Script1 at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:49) at org.codehaus.groovy.runtime.callsite.PogoGetPropertySite.getProperty(PogoGetPropertySite.java:49) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGroovyObjectGetProperty(AbstractCallSite.java:240) at Script1.run(Script1.groovy:1) at groovy.lang.GroovyShell.evaluate(GroovyShell.java:561) at groovy.lang.GroovyShell.evaluate(GroovyShell.java:536) at com.g2one.hudson.grails.GrailsBuilder.evalTarget(GrailsBuilder.java:212) at com.g2one.hudson.grails.GrailsBuilder.perform(GrailsBuilder.java:168) at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19) at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:603) at hudson.model.Build$RunnerImpl.build(Build.java:172) at hudson.model.Build$RunnerImpl.doRun(Build.java:137) at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:417) at hudson.model.Run.run(Run.java:1337) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46) at hudson.model.ResourceController.execute(ResourceController.java:88) at hudson.model.Executor.run(Executor.java:140) What is the best and or easiest way to achieve my goal?

    Read the article

  • Some jQuery-powered features not working in Chrome

    - by Enchantner
    I'm using the jCarouselLite plugin to create two image galleries on the main page of my Django-powered site. The code for the elements with navigation arrows is generated dynamically like this: $(document).ready(function () { $('[jq\\:corner]').each(function(index, item) { item = $(item); item.corner(item.attr('jq:corner')) }) $('[jq\\:menu]').each(function (index, item) { item = $(item); item.menu(item.attr('jq:menu')) }) $('[jq\\:carousel]').each(function(index, item) { item = $(item); var args = item.attr('jq:carousel').split(/\s+/) lister = item.parent().attr('class') + '_lister' item.parent().append('<div id="'+ lister +'"></div>'); $('#' + lister).append("<a class='nav left' href='#'></a><a class='nav right' href='#'></a>"); toparrow = $(item).position().top + $(item).height() - 50; widtharrow = $(item).width(); $('#' + lister).css({ 'display': 'inline-block', 'z-index': 10, 'position': 'absolute', 'margin-left': '-22px', 'top': toparrow, 'width': widtharrow }) $('#' + lister + ' .nav.right').css({ 'margin-left': $('#' + lister).width() + 22 }) item.jCarouselLite({ btnNext: '#' + lister + ' .nav.right', btnPrev: '#' + lister + ' .nav.left', visible: parseInt(args[0]) }) }) The problem is that if the page is loaded through a URL typed in the address bar, some functions don't work and the second gallery appears with the wrong parameters, but if I come to this page by clicking a link, everything works perfectly. It happens only in Google Chrome (Ubuntu, stable 5.0.360.0), not in Firefox or Opera. What could be the reason?

    Read the article

  • Deserializing JSON in WCF throws xml errors in .Net 4.0

    - by Syg
    Hi there. I'm going slightly mad over here; maybe someone else can figure out what's going on. I have a WCF service exposing a function using WebInvoke, like so: [OperationContract] [WebInvoke(Method = "POST", BodyStyle = WebMessageBodyStyle.Wrapped, RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json, UriTemplate = "registertokenpost" )] void RegisterDeviceTokenForYoumiePost(test token); The data contract for the test class looks like this: [DataContract(Namespace="Zooma.Test", Name="test", IsReference=true)] public class test { string waarde; [DataMember(Name="waarde", Order=0)] public string Waarde { get { return waarde; } set { waarde = value; } } } When I send the following JSON message to the service, { "test": { "waarde": "bla" } } the trace log gives me errors (below). I have tried this with just a string instead of the data type (void RegisterDeviceTokenForYoumiePost(string token);) but I get the same error. All help is appreciated; I can't figure it out. It looks like it's creating invalid XML from the JSON message, but I'm not doing any custom serialization here. The formatter threw an exception while trying to deserialize the message: Error in deserializing body of request message for operation 'RegisterDeviceTokenForYoumiePost'. Unexpected end of file. Following elements are not closed: waarde, test, root.</Message><StackTrace> at System.ServiceModel.Dispatcher.OperationFormatter.DeserializeRequest(Message message, Object[] parameters)
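
    Two things worth trying (assumptions, not a confirmed fix): with BodyStyle = WebMessageBodyStyle.Wrapped the JSON wrapper has to match the operation's parameter name ("token"), not the contract name ("test"), so the request would be { "token": { "waarde": "bla" } }; and IsReference=true exists for the XML reference-preserving format, so dropping it is worth a test for a contract that only travels as JSON. A sketch of the trimmed contract:

      [DataContract(Namespace = "Zooma.Test", Name = "test")]
      public class test
      {
          [DataMember(Name = "waarde", Order = 0)]
          public string Waarde { get; set; }
      }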

    Read the article

  • Implementing Model-level caching

    - by Byron
    I was posting some comments in a related question about MVC caching and some questions about actual implementation came up. How does one implement a Model-level cache that works transparently without the developer needing to manually cache, yet still remains efficient? I would keep my caching responsibilities firmly within the model. It is none of the controller's or view's business where the model is getting data. All they care about is that when data is requested, data is provided - this is how the MVC paradigm is supposed to work. (Source: Post by Jarrod) The reason I am skeptical is because caching should usually not be done unless there is a real need, and shouldn't be done for things like search results. So somehow the Model itself has to know whether or not the SELECT statement being issued to it worthy of being cached. Wouldn't the Model have to be astronomically smart, and/or store statistics of what is being most often queried over a long period of time in order to accurately make a decision? And wouldn't the overhead of all this make the caching useless anyway? Also, how would you uniquely identify a query from another query (or more accurately, a resultset from another resultset)? What about if you're using prepared statements, with only the parameters changing according to user input? Another poster said this: I would suggest using the md5 hash of your query combined with a serialized version of your input arguments. This would require twice the number of serialization options. I was under the impression that serialization was quite expensive, and for large inputs this might be even worse than just re-querying. And is the minuscule chance of collision worth worrying about? Conceptually, caching in the Model seems like a good idea to me, but it seems in practicality the developer should have direct control over caching and write it into the controller. Thoughts/ideas? Edit: I'm using PHP and MySQL if that helps to narrow your focus.

    Read the article

  • Create x509 certificate with openssl/makecert tool

    - by Zé Carlos
    I'm creating an x509 certificate using makecert with the following parameters: makecert -r -pe -n "CN=Client" -ss MyApp I want to use this certificate to encrypt and decrypt data with the RSA algorithm. I looked at the generated certificate in the Windows certificate store and everything seems OK (it has a private key, the public key is a 1024-bit RSA key, and so on). Now I use this C# code to encrypt data: X509Store store = new X509Store("MyApp", StoreLocation.CurrentUser); store.Open(OpenFlags.ReadOnly); X509Certificate2Collection certs = store.Certificates.Find(X509FindType.FindBySubjectName, "Client", false); X509Certificate2 _x509 = certs[0]; using (RSACryptoServiceProvider rsa = (RSACryptoServiceProvider)_x509.PrivateKey) { byte[] dataToEncrypt = Encoding.UTF8.GetBytes("hello"); _encryptedData = rsa.Encrypt(dataToEncrypt, true); } When executing the Encrypt method, I receive a CryptographicException with the message "Bad key". I think the code is fine; probably I'm not creating the certificate properly. Any comments? Thanks ---------------- EDIT -------------- If anyone knows how to create the certificate using OpenSSL, that is also a valid answer for me.
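
    A likely cause (an assumption, not confirmed in the article): without -sky exchange, makecert creates a signature-only (AT_SIGNATURE) key pair, which RSACryptoServiceProvider refuses to use for encryption and reports as "Bad key". A sketch with the exchange key type, encrypting with the public key and decrypting with the private one, picking up after _x509 is loaded as above:

      // makecert -r -pe -n "CN=Client" -sky exchange -ss MyApp

      byte[] dataToEncrypt = Encoding.UTF8.GetBytes("hello");
      byte[] encrypted;
      using (var rsaPublic = (RSACryptoServiceProvider)_x509.PublicKey.Key)
      {
          encrypted = rsaPublic.Encrypt(dataToEncrypt, true);   // OAEP padding
      }

      byte[] decrypted;
      using (var rsaPrivate = (RSACryptoServiceProvider)_x509.PrivateKey)
      {
          decrypted = rsaPrivate.Decrypt(encrypted, true);
      }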

    Read the article

  • How do I apply a "template" or "skeleton" of code in C# here?

    - by Scott Stafford
    In my business layer, I need many, many methods that follow the pattern: public BusinessClass PropertyName { get { if (this.m_LocallyCachedValue == null) { if (this.Record == null) { this.m_LocallyCachedValue = new BusinessClass( this.Database, this.PropertyId); } else { this.m_LocallyCachedValue = new BusinessClass( this.Database, this.Record.ForeignKeyName); } } return this.m_LocallyCachedValue; } } I am still learning C#, and I'm trying to figure out the best way to write this pattern once and add methods to each business layer class that follow this pattern with the proper types and variable names substituted. BusinessClass is a typename that must be substituted, and PropertyName, PropertyId, ForeignKeyName, and m_LocallyCachedValue are all variables that should be substituted for. Are attributes usable here? Do I need reflection? How do I write the skeleton I provided in one place and then just write a line or two containing the substitution parameters and get the pattern to propagate itself? EDIT: Modified my misleading title -- I am hoping to find a solution that doesn't involve code generation or copy/paste techniques, and rather to be able to write the skeleton of the code once in a base class in some form and have it be "instantiated" into lots of subclasses as the accessor for various properties. EDIT: Here is my solution, as suggested but left unimplemented by the chosen answerer. // I'll write many of these... public BusinessClass PropertyName { get { return GetSingleRelation(ref this.m_LocallyCachedValue, this.PropertyId, "ForeignKeyName"); } } // That all call this. public TBusinessClass GetSingleRelation<TBusinessClass>( ref TBusinessClass cachedField, int fieldId, string contextFieldName) { if (cachedField == null) { if (this.Record == null) { ConstructorInfo ci = typeof(TBusinessClass).GetConstructor( new Type[] { this.Database.GetType(), typeof(int) }); cachedField = (TBusinessClass)ci.Invoke( new object[] { this.Database, fieldId }); } else { var obj = this.Record.GetType().GetProperty(objName).GetValue( this.Record, null); ConstructorInfo ci = typeof(TBusinessClass).GetConstructor( new Type[] { this.Database.GetType(), obj.GetType()}); cachedField = (TBusinessClass)ci.Invoke( new object[] { this.Database, obj }); } } return cachedField; }

    Read the article

  • Does the Lucene search function work on large documents?

    - by shaon-fan
    Hi there, I have a problem when doing a search with Lucene. First, the Lucene indexing function works well with huge documents, such as a .pst file (the Outlook mail storage): it can build an index file that includes all the information in the .pst. The only problem is that the document is sometimes too large and contains very many words. When I search using Lucene, it can only process the front part of the indexed document; if a word comes from the back part, it can't find that word and there are no hits in the result. But when, while debugging, I split the document into several parts in a crude way and search each part, it works well. So I want to know: how should I split the indexed content, and what size is the limit for searching? Cheers, and I look forward to a reply. ++++++++++++++++++++++++++++++++++++++++++++++++++ Hi there, following what Coady said, I set the length to a maximum of 2^31-1, but the search results still don't include what I want. Simply put, I convert the document's words to a string array[] to analyze; one document has 79,680 words including spaces and symbols. When I search for a certain word, it returns only 300 hits, when actually there are more than 300 results. For the same reason, when I search for a word in the back part of the document, it also can't be found. //////////////set the length indexWriter.SetMaxFieldLength(2147483647); ////////////////////search IndexSearcher searcher = new IndexSearcher(Program.Parameters["INDEX_LOCATION"].ToString()); Hits hits = searcher.Search(query); This is my code, the same as others'. I found this problem when I needed to count every word's hits in a document, which is also how I found that it couldn't find words in the back part of the document. Please help me figure out: is there a searcher length setting somewhere? Have you run into this problem?
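
    For context, maxFieldLength is an indexing-time limit, not a search-time one: by default IndexWriter keeps only the first 10,000 terms of a field and silently drops the rest, so a word from the back of a large document never reaches the index, and raising the limit on the searcher side cannot bring it back. The limit has to be set on the writer and the content re-indexed. A Lucene.Net sketch (the writer arguments are placeholders):

      var writer = new IndexWriter(indexLocation, analyzer, true);  // indexLocation/analyzer are placeholders
      writer.SetMaxFieldLength(int.MaxValue);                       // before any AddDocument call
      writer.AddDocument(doc);                                      // re-index the large documents
      writer.Optimize();
      writer.Close();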

    Read the article

  • Dynamic memory inside a struct

    - by Maximilien
    Hello, I'm editing a piece of code, part of a big project, that uses consts to initialize a bunch of arrays. Because I want to parametrize these consts, I have to adapt the code to use malloc in order to allocate the memory. Unfortunately there is a problem with structs: I'm not able to allocate dynamic memory in the struct itself, and doing it outside would require too much modification of the original code. Here's a small example: int globalx,globaly; struct bigStruct{ struct subStruct{ double info1; double info2; bool valid; }; double data; //subStruct bar[globalx][globaly]; subStruct ** bar=(subStruct**)malloc(globalx*sizeof(subStruct*)); for(int i=0;i<globalx;i++) bar[i]=(*subStruct)malloc(globaly*sizeof(subStruct)); }; int main(){ globalx=2; globaly=3; bigStruct foo; for(int i=0;i<globalx;i++) for(int j=0;j<globaly;j++){ foo.bar[i][j].info1=i+j; foo.bar[i][j].info2=i*j; foo.bar[i][j].valid=(i==j); } return 0; } Note: in the program code I'm editing, globalx and globaly were consts in a specific namespace. Now I have removed the "const" so they can act as parameters that are set exactly once. To summarize: how can I properly allocate memory for the substruct inside the struct? Thank you very much! Max

    Read the article

  • how can I solve a problem with andWhere with symfony/doctrine and odbc?

    - by JaSk
    While following the symfony tutorial (1.4.4) I'm getting an error with ODBC/mssql 2008. SQLSTATE[07002]: COUNT field incorrect: 0 [Microsoft][SQL Server Native Client 10.0]COUNT field incorrect or syntax error (SQLExecute[0] at ext\pdo_odbc\odbc_stmt.c:254). Failing Query: "SELECT [j].[id] AS [j__id], [j].[category_id] AS [j__category_id], [j].[type] AS [j_type], [j].[company] AS [j_company], [j].[logo] AS [j_logo], [j].[url] AS [j_url], [j].[position] AS [j_position], [j].[location] AS [j_location], [j].[description] AS [j__description], [j].[how_to_apply] AS [j__how_to_apply], [j].[token] AS [j__token], [j].[is_public] AS [j__is_public], [j].[is_activated] AS [j__is_activated], [j].[email] AS [j__email], [j].[expires_at] AS [j__expires_at], [j].[created_at] AS [j__created_at], [j].[updated_at] AS [j__updated_at] FROM [jobeet_job] [j] WHERE ([j].[category_id] = '2' AND [j].[expires_at] ?) ORDER BY [j].[expires_at] DESC" I've narrowed the problem to a line that uses parameters public function getActiveJobs(Doctrine_Query $q = null) { if (is_null($q)) { $q = Doctrine_Query::create() -from('JobeetJob j'); } //$q->andWhere('j.expires_at > \''.date('Y-m-d H:i:s', time()).'\'');<-- this works $q->andWhere('j.expires_at > ?', date('Y-m-d H:i:s', time())); //<-- this line has problem $q->addOrderBy('j.expires_at DESC'); return $q->execute(); } can anyone point me in the right direction? Thanks.

    Read the article

  • Adaptive user interface/environment algorithm

    - by WowtaH
    Hi all, I'm working on an information system (in C#) that, while my users use it, gathers statistical data on which pieces of information (tables & records) each user requests the most, and which parts of the interface he/she uses most. I'm using this statistical data to make the application adaptive to the user's needs, both in the way the interface presents itself (e.g. tab/pane ordering) and in the way frequently viewed information is used (e.g. shown higher in search results/suggestion lists). What I'm looking for is an algorithm/formula to determine the current 'hotness'/relevance of these objects for a specific user. A simple hit counter for each object won't be sufficient, because the user might view some information quite frequently for a period of time and then move on to the next, making the old information less relevant. So I think my algorithm also needs some sort of sliding/historical principle to account for the changing popularity of the objects in the application over time. So, the question is: does anybody have some sort of algorithm that accounts for that 'popularity over time'? Preferably with some explanation of the parameters :) Thanks! PS I've looked at other posts like http://stackoverflow.com/questions/32397/popularity-algorithm but I couldn't quite port it to my specific case. Any help is appreciated.
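
    A minimal sketch of one way to do it (not from the article): give each object a score that decays exponentially with the time since its last hit, so a burst of views fades out with a configurable half-life while recent activity dominates the ranking.

      public class PopularityScore
      {
          private static readonly TimeSpan HalfLife = TimeSpan.FromDays(7);  // tuning knob
          private double score;
          private DateTime lastHit = DateTime.UtcNow;

          public void RegisterHit(double weight)
          {
              score = CurrentScore() + weight;   // decay the old score, then add the new hit
              lastHit = DateTime.UtcNow;
          }

          public double CurrentScore()
          {
              double elapsedDays = (DateTime.UtcNow - lastHit).TotalDays;
              return score * Math.Pow(0.5, elapsedDays / HalfLife.TotalDays);
          }
      }

    Ranking then just sorts objects by CurrentScore(); the half-life controls how quickly old popularity stops mattering.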

    Read the article

  • Android 3.1+ USB as virtual COM port

    - by ZachMc
    I have a third party usb device, that when plugged into a Windows machine, is recognized as a serial device and assigned to the COM 4 port. I can communicate with the device just like I would with a device connected via a serial port. For instance, I can write "abc" serially to the device via the USB connection. I have been searching for a way to do a similar thing in Android. If I try the Usb Host method, and use a UsbManager to open the UsbDevice, I can get one interface, with 2 endpoints. I have tried sending control messages using the method in UsbDeviceConnection, but the method returns -1 for everything (though I don't know what I should use for the parameters of that method). Is there a way to get an OutputStream that I can write to that will send bytes to the USB device? Right now I am looking at recompiling the kernel to include a virtual COM port driver and write some native code to be able to do this. Thanks! Edit: I am using the FTDI serial to USB converter circuit. Is this compatible with Android?

    Read the article
