Search Results

Search found 14644 results on 586 pages for 'auto generate'.


  • When to Store Temporary Values in Hidden Field vs. Session vs. Database?

    - by viatropos
    I am trying to build a simple OpenID login panel similar to how Stack Overflow's works. The goal is:

    1. User clicks an OpenID/OAuth provider.
    2. The OpenID/OAuth handshake happens and we end up with the result (already made that).
    3. Then we want to confirm that the user actually wants to create a new account (vs. associating the account with another OpenID account).

    On Stack Overflow, they keep a hidden field on a form that looks like this:

        <form action="/users/openidconfirm" method="post">
          <p>This is an OpenID we haven't seen on Stack Overflow before:</p>
          <p class="openid-identifier">https://me.yahoo.com/a/some-hash</p>
          <p>Do you want to associate this OpenID with your Stack Overflow account?</p>
          <div>
            <input type="hidden" name="fkey" value="9792ab2zza1q2a4ac414casdfa137eafba7">
            <input type="hidden" name="s" value="c1a3q133-11fa-49r0-a7bz-da19849383218">
            <input type="submit" value="Associate OpenID">
            <input type="button" value="Cancel" onclick="window.location.href = 'http://stackoverflow.com/users/169992/viatropos?s=c1a3q133-11fa-49r0-a7bz-da19849383218'">
          </div>
        </form>

    Initial question is: what are those hashes fkey and s? Not that I really care what these specific hashes are, but what seems to be happening is that they have processed the OpenID response and saved it to the DB in a temporary object or something, and from there they generate these keys, because they don't look like OAuth keys to me.

    Main situation: after I have processed the OpenID/OAuth responses, I don't yet want to create a new user/account until the user submits the "confirm" form. Should I store the keys and tokens temporarily in a "confirm" form like this? Or is there a better way? It seems that using a temp database object would be a lot of work to manage properly. Thanks for the help. Lance
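
    One way to avoid a temporary database object entirely is to round-trip the claimed identifier in the hidden field together with a server-side signature, so a tampered value is rejected on postback. A minimal C#/ASP.NET sketch, assuming the OpenID response has already been validated; the class, key handling and field names are illustrative, not Stack Overflow's actual scheme:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        // Hypothetical helper: emit the claimed OpenID plus an HMAC of it into hidden
        // fields, then verify the pair when the confirmation form is posted back.
        static class ClaimedIdToken
        {
            // In a real application the key comes from configuration, not a literal.
            static readonly byte[] ServerKey = Encoding.UTF8.GetBytes("replace-with-a-long-random-secret");

            public static string Sign(string claimedId)
            {
                using (var hmac = new HMACSHA256(ServerKey))
                {
                    byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(claimedId));
                    return Convert.ToBase64String(mac);
                }
            }

            public static bool Verify(string claimedId, string token)
            {
                // A constant-time comparison is preferable in production code.
                return Sign(claimedId) == token;
            }
        }

    With that in place, the hidden fields only need to carry the claimed identifier and its signature, and nothing has to be persisted until the user confirms.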

    Read the article

  • drawing to a JPanel without inheritance

    - by g.rocket
    Right now I'm working on a program that throws up a bunch of separate (generated at runtime) images, each in their own window. To do this I've tried this approach:

        public void display() {
            JFrame window = new JFrame("NetPart");
            JPanel canvas = new JPanel();
            window.getContentPane().add(canvas);
            Graphics g = canvas.getGraphics();
            Dimension d = getSize();
            System.out.println(d);
            draw(g, new Point(d.minX * 50, d.maxY * 50), 50);
            window.setSize(d.size(50));
            window.setResizable(false);
            window.setDefaultCloseOperation(JFrame.HIDE_ON_CLOSE);
            window.setVisible(true);
        }

        public void draw(Graphics g, Point startLoc, int scale) {
            // generate and draw the image
        }

        public Dimension getSize() {
            // returns my own dimensions class
        }

    However, this throws a NullPointerException in draw, claiming that the graphics is null. Is there any way to draw to a JPanel from outside it (without inheriting from JPanel and overriding paintComponent)? Any help would be appreciated.
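
    Since the question rules out subclassing JPanel, one alternative sketch is to render into an off-screen BufferedImage (whose Graphics is always available, unlike getGraphics() on a component that has not been displayed yet) and show the finished image in a JLabel. Sizes and drawing calls below are placeholders for the asker's own draw(...) logic:

        import java.awt.*;
        import java.awt.image.BufferedImage;
        import javax.swing.*;

        public class NetPartWindow {
            public void display() {
                int width = 400, height = 400;          // stand-ins for d.size(50)
                BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
                Graphics2D g = image.createGraphics();  // never null, unlike canvas.getGraphics() above
                g.setColor(Color.WHITE);
                g.fillRect(0, 0, width, height);
                g.setColor(Color.BLACK);
                g.drawOval(50, 50, 300, 300);           // replace with draw(g, startLoc, 50)
                g.dispose();

                JFrame window = new JFrame("NetPart");
                window.getContentPane().add(new JLabel(new ImageIcon(image)));
                window.pack();
                window.setResizable(false);
                window.setDefaultCloseOperation(JFrame.HIDE_ON_CLOSE);
                window.setVisible(true);
            }

            public static void main(String[] args) {
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() { new NetPartWindow().display(); }
                });
            }
        }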

    Read the article

  • Different EF Property DataType than Storage Layer Possible?

    - by dj_kyron
    Hi, I am putting together a WCF Data Service for PatientEntities using Entity Framework. My solution needs to address these requirements: Property DateOfBirth of entity Patient is stored in SQL Server as string. It would be ideal if the entity class did not also use the "string" type but rather a DateTime type. (I would expect this to be possible since we're abstracting away from the storage layer). Where could a conversion mechanism be put in place that would convert to and from DateTime/string so that the entity and SQL Server are in sync?. I cannot change the storage layer's structure, so I have to work around it. WCF Data Services (Read-only, so no need for saving changes) need to be used since clients will be able to use LINQ expressions to consume the service. They can generate results based on any given query scenario they need and not be constrained by a single method such as GetPatient(int ID). I've tried to use DTOs, but run into problem of mapping the ObjectContext to a DTO, I don't think that is theoretically possible...or too complicated if it is. I've tried to use Self Tracking Entities but they require the metadata from the .edmx file if I'm correct, and this isn't allowing a different property data type. I also want to add customizations to my Entity getter methods so that a property "MRN" of type "string" needs to have .Replace("MR~", string.Empty) performed before it is returned. I can add this to the getter methods but the problem with that is Entity Framework will overwrite that next time it refreshes the entity classes. Is there a permanent place I can put these? Should I use POCO instead? How would that work with WCF Data Services? Where would the service grab the metadata?
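
    One workaround that survives EDMX regeneration is to put hand-written wrapper properties in a second file of the same partial class; the designer only rewrites its own file. The sketch below assumes the generated entity is named Patient with string properties DateOfBirth and MRN, and assumes a "yyyyMMdd" storage format; all of that is illustrative and would need to match the real model:

        using System;
        using System.Globalization;

        // Stand-in for the EF-generated half of the entity (normally in the designer file).
        public partial class Patient
        {
            public string DateOfBirth { get; set; }   // stored as a string in SQL Server
            public string MRN { get; set; }
        }

        // Hand-written half: lives in its own file, so a model refresh never touches it.
        public partial class Patient
        {
            public DateTime? DateOfBirthAsDate
            {
                get
                {
                    DateTime parsed;
                    return DateTime.TryParseExact(DateOfBirth, "yyyyMMdd",
                               CultureInfo.InvariantCulture, DateTimeStyles.None, out parsed)
                           ? (DateTime?)parsed : null;
                }
                set { DateOfBirth = value.HasValue ? value.Value.ToString("yyyyMMdd") : null; }
            }

            public string CleanMRN
            {
                get { return MRN == null ? null : MRN.Replace("MR~", string.Empty); }
            }
        }

    Whether such add-on properties are then visible through a WCF Data Services feed is a separate question; they work most cleanly when the mapping is done in a DTO or POCO layer instead.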

    Read the article

  • Improving the performance of XSL

    - by Rachel
    I am using the XSLT 2.0 code below to find the ids of the text nodes that contain the list of indices I give as input. The code works perfectly, but in terms of performance it takes a long time for huge files; even for huge files, if the index values are small the result comes back in a few milliseconds. I am using the saxon9he Java processor to execute the XSL.

        <xsl:variable name="insert-data" as="element(data)*">
          <xsl:for-each-group select="doc($insert-file)/insert-data/data"
                              group-by="xsd:integer(@index)">
            <xsl:sort select="current-grouping-key()"/>
            <data index="{current-grouping-key()}"
                  text-id="{generate-id(
                      $main-root/descendant::text()[
                          sum((preceding::text(), .)/string-length(.)) ge current-grouping-key()
                      ][1])}">
              <xsl:copy-of select="current-group()/node()"/>
            </data>
          </xsl:for-each-group>
        </xsl:variable>

    In the above solution, if the index value is huge, say 270962, the XSL takes 83427 ms to execute. In huge files, if the index values are huge, say 4605415 or 4605431, it takes several minutes. It seems the computation of the variable "insert-data" is what takes the time, even though it is a global variable and computed only once. Should the XSL be addressed, or the processor? How can I improve the performance of the XSL?

    Read the article

  • Creating Mobile Cross-Platform Scripting Solution

    - by Aplomb
    I have a dream of designing a mobile cross-platform scripting solution in which developers only need to code once in a scripting language (possibly JavaScript; other options need further investigation), and the solution then generates the installation files for multiple mobile platforms such as J2ME, Android, Symbian, BlackBerry, Palm, Windows Mobile/Phone, iPhone, etc. Using the scripting language, developers code against a unified platform API, plus other extension frameworks on top of the scripting language, such as:

    1. A 2D UI framework: mostly for mobile applications; the UI style would compete with the native UI frameworks, so applications on a given platform no longer all look the same.
    2. A 3D UI framework: lets platforms with 3D capability present their mobile applications in a more attractive 3D style.
    3. A variable game engine: developers can use it to easily build high-quality games.
    4. etc.

    This mobile solution provides three things: cross-platform support, scripting, and middleware. So what do you think about this idea? Is it good for developers? Is it profitable? What is the better direction for this idea?

    Read the article

  • How to generate a monochrome bit mask for a 32-bit bitmap

    - by Mordachai
    Under Win32, it is a common technique to generate a monochrome bitmask from a bitmap for transparency use by doing the following: SetBkColor(hdcSource, clrTransparency); VERIFY(BitBlt(hdcMask, 0, 0, bm.bmWidth, bm.bmHeight, hdcSource, 0, 0, SRCCOPY)); This assumes that hdcSource is a memory DC holding the source image, and hdcMask is a memory DC holding a monochrome bitmap of the same size (so both are 32x32, but the source is 4 bit color, while the target is 1bit monochrome). However, this seems to fail for me when the source is 32 bit color + alpha. Instead of getting a monochrome bitmap in hdcMask, I get a mask that is all black. No bits get set to white (1). Whereas this works for the 4bit color source. My search-foo is failing, as I cannot seem to find any references to this particular problem. I have isolated that this is indeed the issue in my code: i.e. if I use a source bitmap that is 16 color (4bit), it works; if I use a 32 bit image, it produces the all-black mask. Is there an alternate method I should be using in the case of 32 bit color images? Is there an issue with the alpha channel that overrides the normal behavior of the above technique? Thanks for any help you may have to offer! ADDENDUM: I am still unable to find a technique that creates a valid monochrome bitmap for my GDI+ produced source bitmap. I have somewhat alleviated my particular issue by simply not generating a monochrome bitmask at all, and instead I'm using TransparentBlt(), which seems to get it right (but I don't know what they're doing internally that's any different that allows them to correctly mask the image). It might be useful to have a really good, working function: HBITMAP CreateTransparencyMask(HDC hdc, HBITMAP hSource, COLORREF crTransparency); Where it always creates a valid transparency mask, regardless of the color depth of hSource. Ideas?
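
    A sketch of one way to build such a function without relying on the SetBkColor/BitBlt conversion at all: pull the source out as a 32bpp DIB and compare the RGB channels on the CPU, so the alpha byte can never influence the comparison. This is an untested outline with error handling omitted; note that GetDIBits requires that hSource not be selected into a device context at the time of the call:

        #include <windows.h>
        #include <stdlib.h>
        #include <string.h>

        HBITMAP CreateTransparencyMask(HDC hdc, HBITMAP hSource, COLORREF crTransparent)
        {
            BITMAP bm;
            GetObject(hSource, sizeof(bm), &bm);

            /* Fetch the pixels as a top-down 32bpp DIB regardless of the source format. */
            BITMAPINFO bmi;
            memset(&bmi, 0, sizeof(bmi));
            bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
            bmi.bmiHeader.biWidth = bm.bmWidth;
            bmi.bmiHeader.biHeight = -bm.bmHeight;           /* negative height = top-down */
            bmi.bmiHeader.biPlanes = 1;
            bmi.bmiHeader.biBitCount = 32;
            bmi.bmiHeader.biCompression = BI_RGB;

            DWORD *pixels = (DWORD *)malloc((size_t)bm.bmWidth * bm.bmHeight * sizeof(DWORD));
            GetDIBits(hdc, hSource, 0, bm.bmHeight, pixels, &bmi, DIB_RGB_COLORS);

            /* Scanlines handed to CreateBitmap must be WORD-aligned. */
            int stride = ((bm.bmWidth + 15) / 16) * 2;
            BYTE *bits = (BYTE *)calloc((size_t)stride * bm.bmHeight, 1);

            for (int y = 0; y < bm.bmHeight; ++y)
                for (int x = 0; x < bm.bmWidth; ++x) {
                    DWORD px = pixels[y * bm.bmWidth + x];   /* memory order B,G,R,A */
                    COLORREF rgb = RGB((px >> 16) & 0xFF, (px >> 8) & 0xFF, px & 0xFF);
                    if (rgb == crTransparent)                /* transparent -> white (1), like the BitBlt trick */
                        bits[y * stride + x / 8] |= (BYTE)(0x80 >> (x % 8));
                }

            HBITMAP hMask = CreateBitmap(bm.bmWidth, bm.bmHeight, 1, 1, bits);
            free(bits);
            free(pixels);
            return hMask;
        }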

    Read the article

  • Generating easy-to-remember random identifiers

    - by Carl Seleborg
    Hi all, As all developers do, we constantly deal with some kind of identifiers as part of our daily work. Most of the time, it's about bugs or support tickets. Our software, upon detecting a bug, creates a package that has a name formatted from a timestamp and a version number, which is a cheap way of creating reasonably unique identifiers to avoid mixing packages up. Example: "Bug Report 20101214 174856 6.4b2". My brain just isn't that good at remembering numbers. What I would love to have is a simple way of generating alpha-numeric identifiers that are easy to remember. Examples would be "azil3", "ulmops", "fel2way", etc. I just made these up, but they are much easier to recognize when you see many of them at once. I know of algorithms that perform trigram analysis on text (say you feed them a whole book in German) and that can generate strings that look and feel like German words. This requires lots of data, though, and makes it slightly less suitable for embedding in an application just for this purpose. Do you know of anything else? Thanks! Carl
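
    For reference, a minimal sketch of a trigram-free approach: alternate consonants and vowels so the result is pronounceable, and append a digit to keep the space reasonably large. The alphabet and lengths are arbitrary choices:

        import random

        CONSONANTS = "bdfgklmnprstvz"
        VOWELS = "aeiou"

        def memorable_id(syllables=3, rng=random):
            """Return something like 'zilura4' - pronounceable, hence easier to recall."""
            parts = [rng.choice(CONSONANTS) + rng.choice(VOWELS) for _ in range(syllables)]
            return "".join(parts) + str(rng.randrange(10))

        if __name__ == "__main__":
            print([memorable_id() for _ in range(5)])

    With 14 consonants, 5 vowels, three syllables and one digit that gives 14^3 * 5^3 * 10, about 3.4 million combinations, which may or may not be enough depending on how many reports are generated.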

    Read the article

  • Why doesn't this simple regex match what I think it should?

    - by Kevin Stargel
    I have a data file that looks like the following example. I've added '%' in lieu of \t, the tab control character.

        1234:56% Alice Worthington alicew% Jan 1, 2010 10:20:30 AM% Closed% Development
        Update%% file-one.txt% 1.1% c:/foo/bar/quux
        Add%% file-two.txt% 2.5.2% c:/foo/bar/quux
        Remove%% file-three.txt% 3.4% c:/bar/quux
        Update%% file-four.txt% 4.6.5.3% c:/zzz
        ... many more records of the above form

    The records I'm interested in are the lines beginning with "Update", "Add", "Remove", and so on. I won't know what the lines begin with ahead of time, or how many lines precede them. I do know that they always begin with a string of letters followed by two tabs. So I wrote this regex:

        generate-report-for 1234:56 | egrep "^[[:alpha:]]+\t\t.+"

    But this matches zero lines. Where did I go wrong?

    Edit: I get the same results whether I use '...' or "..." for the egrep expression, so I'm not sure it's a shell thing.
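
    For what it's worth, the usual stumbling block with a pattern like this is that POSIX egrep does not treat \t as an escape for a tab, so the pattern ends up matching a literal backslash and "t". Two alternatives, assuming a bash-like shell and GNU grep (generate-report-for is the asker's own command):

        # Put real tab characters in the pattern via ANSI-C quoting:
        generate-report-for 1234:56 | egrep "^[[:alpha:]]+"$'\t\t'

        # Or switch to Perl-compatible regexes, where \t is understood:
        generate-report-for 1234:56 | grep -P '^[[:alpha:]]+\t\t'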

    Read the article

  • What is the form_for syntax for nested resources?

    - by Kris
    I am trying to create a form for a nested resource. Here is my route:

        map.resources :websites do |website|
          website.resources :domains
        end

    Here are my attempts and the errors:

        <% form_for(@domain, :url => website_domains_path(@website)) do | form | %>
          <%= form.text_field :name %>
        # ArgumentError: wrong number of arguments (1 for 0)
        # form_helper.rb:290:in 'respond_to?'
        # form_helper.rb:290:in 'apply_form_for_options!'
        # form_helper.rb:277:in 'form_for'

        <% form_for([@website, @domain]) do | form | %>
          <%= form.text_field :name %>
        # ArgumentError: wrong number of arguments (1 for 0)
        # form_helper.rb:290:in 'respond_to?'
        # form_helper.rb:290:in 'apply_form_for_options!'
        # form_helper.rb:277:in 'form_for'

        <% form_for(:domain, @domain, :url => website_domains_path(@website)) do | form | %>
          <%= form.text_field :name %>
        # ArgumentError: wrong number of arguments (1 for 0)
        # wrapper.rb:14:in 'respond_to?'
        # wrapper.rb:14:in 'wrap'
        # active_record_helper.rb:174:in 'error_messages_for'

        <% form_for(:domain, [@website, @domain]) do | form | %>
          <%= form.text_field :name %>
        # UndefinedMethodError 'name' for #<Array:0x40fa498>

    I have confirmed that both @website and @domain contain instances of the correct class. The routes also generate correctly if used like this, for example, so I don't think there is an issue with the route or URL helpers:

        <%= website_domains_path(1) %>
        <%= website_data_source_path(1, 1) %>

    Rails 2.3.5
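
    For comparison, a conventional Rails 2.3 setup for this nesting looks like the sketch below (controller and action names are assumed); if this exact shape still raises the same respond_to? error, the problem more likely sits in one of the model objects than in form_for itself:

        # app/controllers/domains_controller.rb
        class DomainsController < ApplicationController
          def new
            @website = Website.find(params[:website_id])
            @domain  = @website.domains.build
          end
        end

    And in the view:

        <% form_for [@website, @domain] do |form| %>
          <%= form.text_field :name %>
          <%= form.submit "Save" %>
        <% end %>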

    Read the article

  • Find the Algorithm that generates the checksum

    - by knivmannen
    I have a sensing device that transmits a 6-byte message along with a 1-byte counter and, supposedly, a checksum. The data looks something like this:

        ------ DATA ------  Counter  Checksum?
        55 FF 00 00 EC FF     60        1F

    The last four bits in the counter are always set to 0, i.e. those bits are probably not used. The last byte is assumed to be the checksum since it has a quite peculiar nature: it tends to randomly change as the data changes. Now what I need is to find the algorithm that computes this checksum based on the data.

    What I have tried is all possible CRC-8 polynomials; for each polynomial I have tried reflecting the data, toggling it, initialising it with non-zero values, etc. I've come to the conclusion that I am not dealing with a normal CRC algorithm. I have also tried some Fletcher and Adler methods without success, and XORed stuff back and forth, but still I have no clue how to generate the checksum.

    My biggest concern is: how is the counter used? The same data with a different counter value generates a different checksum. I have tried to include the counter in my computations but without any luck. Here are some other data samples:

        55 FF 00 00 F0 FF  A0  38
        66 0B EA FF BF FF  C0  CA
        5E 18 EA FF B7 FF  60  BD
        F6 30 16 00 FC FE  10  81

    One more thing that might be worth mentioning is that the last byte in the data only takes on the values FF or FE. Please, if you have any tips or tricks that I may try, post them here; I am truly desperate. Thanks.
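
    A small search harness can at least rule out the simple schemes before going back to CRC hunting. The sketch below assumes the eight bytes per line really are six data bytes, the counter, and the checksum, and it only tries a handful of candidates; the dictionary can be extended with other guesses:

        from functools import reduce

        samples = [  # (six data bytes, counter, observed checksum)
            ([0x55, 0xFF, 0x00, 0x00, 0xEC, 0xFF], 0x60, 0x1F),
            ([0x55, 0xFF, 0x00, 0x00, 0xF0, 0xFF], 0xA0, 0x38),
            ([0x66, 0x0B, 0xEA, 0xFF, 0xBF, 0xFF], 0xC0, 0xCA),
            ([0x5E, 0x18, 0xEA, 0xFF, 0xB7, 0xFF], 0x60, 0xBD),
            ([0xF6, 0x30, 0x16, 0x00, 0xFC, 0xFE], 0x10, 0x81),
        ]

        def xor_all(values, seed=0):
            return reduce(lambda a, b: a ^ b, values, seed)

        candidates = {
            "sum of data":              lambda d, c: sum(d) & 0xFF,
            "sum of data + counter":    lambda d, c: (sum(d) + c) & 0xFF,
            "two's compl. of that sum": lambda d, c: (-(sum(d) + c)) & 0xFF,
            "ones' compl. of that sum": lambda d, c: ~(sum(d) + c) & 0xFF,
            "xor of data":              lambda d, c: xor_all(d),
            "xor of data ^ counter":    lambda d, c: xor_all(d, c),
        }

        for name, fn in candidates.items():
            if all(fn(data, counter) == checksum for data, counter, checksum in samples):
                print("matches every sample:", name)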

    Read the article

  • Maximum number of characters using keystrokes A, Ctrl+A, Ctrl+C and Ctrl+V

    - by munda
    This is an interview question from Google. I am not able to solve it by myself. Can somebody shed some light?

    Write a program to print the sequence of keystrokes that generates the maximum number of 'A' characters. You are allowed to use only 4 keys: A, Ctrl+A, Ctrl+C and Ctrl+V. Only N keystrokes are allowed. All Ctrl+ combinations are counted as one keystroke, so Ctrl+A is one keystroke. For example, the sequence A, Ctrl+A, Ctrl+C, Ctrl+V generates two A's in 4 keystrokes. (Ctrl+A is Select All, Ctrl+C is Copy, Ctrl+V is Paste.)

    I did some mathematics. For any N, using x presses of A, one Ctrl+A, one Ctrl+C and y presses of Ctrl+V, we can generate at most ((N-1)/2)^2 A's. Beyond some threshold M (i.e. for N > M), it is better to use repeated Ctrl+A, Ctrl+C, Ctrl+V sequences, as each one doubles the number of A's. Note that the sequence Ctrl+A, Ctrl+C, Ctrl+V does not overwrite the existing selection; it appends the copied text to what is already selected.
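
    The standard way to attack this is dynamic programming over the number of keystrokes: either the last key was a plain A, or the tail of the sequence was Ctrl+A, Ctrl+C followed by some number of Ctrl+V presses, which multiplies an earlier total. A sketch in Python, using the question's model in which a paste appends to (rather than replaces) the selection:

        def max_as(n):
            """Most A's producible with n keystrokes of A, Ctrl+A, Ctrl+C, Ctrl+V."""
            best = [0] * (n + 1)
            for i in range(1, n + 1):
                best[i] = best[i - 1] + 1                    # keystroke i is a plain A
                # Last burst: build best[j] A's, then Ctrl+A, Ctrl+C and (i-j-2) pastes,
                # turning best[j] characters into best[j] * (i - j - 1).
                for j in range(1, i - 2):
                    best[i] = max(best[i], best[j] * (i - j - 1))
            return best[n]

        print([max_as(n) for n in range(1, 12)])
        # [1, 2, 3, 4, 5, 6, 9, 12, 16, 20, 27]

    Recovering the actual keystroke sequence is a matter of remembering which choice produced each best[i] and walking backwards from best[n].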

    Read the article

  • WebService Stubs WSDL Axis2

    - by tt0686
    Good afternoon in my timezone. I have a wsdl file with the following snippet of code: <schema targetNamespace="http://util.cgd.pt" xmlns="http://www.w3.org/2001/XMLSchema"> <import namespace="http://schemas.xmlsoap.org/soap/encoding/" /> <complexType name="Attribute"> <sequence> <element name="fieldType" nillable="true" type="xsd:string" /> <element name="dataType" nillable="true" type="xsd:string" /> <element name="name" nillable="true" type="xsd:string" /> <element name="value" nillable="true" type="xsd:string" /> </sequence> </complexType> <complexType name="ArrayOffAttribute"> <complexContent> <restriction base="soapenc:Array"> <attribute ref="soapenc:arrayType" wsdl:arrayType="tns3:Attribute[]" /> </restriction> </complexContent> </complexType> When i use jax-rpc or Axis1 to generate the stubs the type Attribute is generated, but when i use Axis2 the type Attribute is not generated and a new type is created ArrayOffAttribute, this new type extends the axis2.databinding.types.soapencoding.Array and permits to add elements through the array.addObject(object), my question is, i am migrating one Java EE application from webservices using jax-rpc to start using Axis2, and the methods are using the Attribute type to fullfill attributes fields, now in Axis2 and do not have attribute type , what should i use in the ArrayOffAttribute.addObject(?) ? Could be something wrong with Axis2 ? i am stop here :( Thanks in advance Best regards

    Read the article

  • Reflecting over classes in .NET produces methods only differing by a modifier

    - by mrjoltcola
    I'm a bit boggled by something, I hope the CLR gearheads can help. Apparently my gears aren't big enough. I have a reflector utility that generates assembly stubs for Cola for .NET, and I find classes have methods that only differ by a modifier, such as virtual. Example below, from Oracle.DataAccess.dll, method GetType(): class OracleTypeException : System.SystemException { virtual string ToString (); virtual System.Exception GetBaseException (); virtual void set_Source (string value); virtual void GetObjectData (System.Runtime.Serialization.SerializationInfo info, System.Runtime.Serialization.StreamingContext context); virtual System.Type GetType (); // here virtual bool Equals (object obj); virtual int32 GetHashCode (); System.Type GetType (); // and here } What is this? I have not been able to reproduce this with C# and it causes trouble for Cola as it thinks GetType() is a redefinition, since the signature is identical. My method reflector starts like this: static void DisplayMethod(MethodInfo m) { if ( // Filter out things Cola cannot yet import, like generics, pointers, etc. m.IsGenericMethodDefinition || m.ContainsGenericParameters || m.ReturnType.IsGenericType || !m.ReturnType.IsPublic || m.ReturnType.IsArray || m.ReturnType.IsPointer || m.ReturnType.IsByRef || m.ReturnType.IsPointer || m.ReturnType.IsMarshalByRef || m.ReturnType.IsImport ) return; // generate stub signature // [snipped] }

    Read the article

  • Passing IDisposable objects through constructor chains

    - by Matt Enright
    I've got a small hierarchy of objects that in general gets constructed from data in a Stream, but for some particular subclasses, can be synthesized from a simpler argument list. In chaining the constructors from the subclasses, I'm running into an issue with ensuring the disposal of the synthesized stream that the base class constructor needs. Its not escaped me that the use of IDisposable objects this way is possibly just dirty pool (plz advise?) for reasons I've not considered, but, this issue aside, it seems fairly straightforward (and good encapsulation). Codes: abstract class Node { protected Node (Stream raw) { // calculate/generate some base class properties } } class FilesystemNode : Node { public FilesystemNode (FileStream fs) : base (fs) { // all good here; disposing of fs not our responsibility } } class CompositeNode : Node { public CompositeNode (IEnumerable some_stuff) : base (GenerateRaw (some_stuff)) { // rogue stream from GenerateRaw now loose in the wild! } static Stream GenerateRaw (IEnumerable some_stuff) { var content = new MemoryStream (); // molest elements of some_stuff into proper format, write to stream content.Seek (0, SeekOrigin.Begin); return content; } } I realize that not disposing of a MemoryStream is not exactly a world-stopping case of bad CLR citizenship, but it still gives me the heebie-jeebies (not to mention that I may not always be using a MemoryStream for other subtypes). It's not in scope, so I can't explicitly Dispose () it later in the constructor, and adding a using statement in GenerateRaw () is self-defeating since I need the stream returned. Is there a better way to do this? Preemptive strikes: yes, the properties calculated in the Node constructor should be part of the base class, and should not be calculated by (or accessible in) the subclasses I won't require that a stream be passed into CompositeNode (its format should be irrelevant to the caller) The previous iteration had the value calculation in the base class as a separate protected method, which I then just called at the end of each subtype constructor, moved the body of GenerateRaw () into a using statement in the body of the CompositeNode constructor. But the repetition of requiring that call for each constructor and not being able to guarantee that it be run for every subtype ever (a Node is not a Node, semantically, without these properties initialized) gave me heebie-jeebies far worse than the (potential) resource leak here does.
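
    One pattern that keeps the base constructor exactly as it is, sketched against the question's own classes: make the stream-taking constructor private and expose a static factory that owns the synthesized stream in a using block. By the time the factory returns, the base constructor has finished reading, so disposing the stream is safe:

        class CompositeNode : Node
        {
            // This constructor no longer creates the stream, so it has nothing to dispose.
            private CompositeNode (Stream raw) : base (raw) { }

            public static CompositeNode Create (IEnumerable some_stuff)
            {
                using (var content = GenerateRaw (some_stuff))
                {
                    return new CompositeNode (content);   // base ctor consumes it inside the using
                }
            }

            static Stream GenerateRaw (IEnumerable some_stuff)
            {
                var content = new MemoryStream ();
                // molest elements of some_stuff into proper format, write to stream
                content.Seek (0, SeekOrigin.Begin);
                return content;
            }
        }

    The factory is still one extra member per subtype, but unlike the protected-method approach it cannot be forgotten: constructing a CompositeNode any other way simply doesn't compile.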

    Read the article

  • Access 2007 DAO VBA Error 3381 causes objects in calling methods to "break".

    - by MT
    --- AFTER FURTHER INVESTIGATION --- "tblABC" in the example below must be a linked table (to another Access database). If "tblABC" is in the same database as the code, the problem does not occur.

    We have recently upgraded to Office 2007. We have a method with an open DAO recordset. We then call another sub (UpdatingSub below) that executes SQL and has its own error handler. If error 3381 is encountered, the recordset in the calling method becomes "unset" and we get error 3420, 'Object invalid or no longer set'. Other errors in UpdatingSub do not cause the same problem. This code works fine in Access 2003.

        Private Sub Whatonearth()
            Dim rs As dao.Recordset
            Set rs = CurrentDb.OpenRecordset("tblLinkedABC")
            Debug.Print rs.RecordCount
            UpdatingSub "ALTER TABLE tblTest DROP Column ColumnNotThere"
            'Error 3420 occurs on the next line even though error 3381 is trapped in UpdatingSub,
            'because error 3381 was encountered when calling UpdatingSub above
            Debug.Print rs.RecordCount
        End Sub

        Private Sub WhatonearthThatWorks()
            Dim rs As dao.Recordset
            Set rs = CurrentDb.OpenRecordset("tblLinkedABC")
            Debug.Print rs.RecordCount
            'Change the update to generate a different error
            UpdatingSub "NONSENSE SQL STATEMENT"
            'Error is trapped in UpdatingSub. Next line works fine.
            Debug.Print rs.RecordCount
        End Sub

        Private Sub UpdatingSub(strSQL As String)
            On Error GoTo ErrHandler:
            CurrentDb.Execute strSQL
        ErrHandler:
            'LogError
        End Sub

    Any thoughts? We are running Office Access 2007 (12.0.6211.1000) SP1 MSO (12.0.6425.1000). Perhaps see if SP2 can be distributed?

    Read the article

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.

    Read the article

  • Qt and variadic functions

    - by Noah Roberts
    OK, before lecturing me on the use of C-style variadic functions in C++...everything else has turned out to require nothing short of rewriting the Qt MOC. What I'd like to know is whether or not you can have a "slot" in a Qt object that takes an arbitrary amount/type of arguments. The thing is that I really want to be able to generate Qt objects that have slots of an arbitrary signature. Since the MOC is incompatible with standard preprocessing and with templates, it's not possible to do so with either direct approach. I just came up with another idea: struct funky_base : QObject { Q_OBJECT funky_base(QObject * o = 0); public slots: virtual void the_slot(...) = 0; }; If this is possible then, because you can make a template that is a subclass of a QObject derived object so long as you don't declare new Qt stuff in it, I should be able to implement a derived templated type that takes the ... stuff and turns it into the appropriate, expected types. If it is, how would I connect to it? Would this work? connect(x, SIGNAL(someSignal(int)), y, SLOT(the_slot(...))); If nobody's tried anything this insane and doesn't know off hand, yes I'll eventually try it myself...but I am hoping someone already has existing knowledge I can tap before possibly wasting my time on it.

    Read the article

  • Transmission and verification of certificate (openssl) with socket in c

    - by allenzzzxd
    Hello, I have to write this code in C. I have already generated the certificate of one terminal, t1 (t1.pem), using OpenSSL, and communication between terminals t1 and t2 has been established via a socket in C. Now I want to send this certificate to the other terminal, t2. t2 should receive the certificate, verify it, and answer t1 with an acceptance; when t1 gets this acceptance it will do the rest of the work. But I don't know how to do these things. For example, do I transmit t1.pem as a string? And on the t2 side, how do I verify it? I know there are functions in OpenSSL to do so, but I'm not clear about them. Finally, what should the acceptance normally look like? A lot of questions here, sorry. If someone could give me some guidance... Thanks a lot in advance!
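
    On the t2 side, a sketch of the verification step using OpenSSL's X509 APIs, assuming t1 sends the PEM text of t1.pem over the socket and t2 has the issuing CA certificate available as ca.pem (the file name and the one-byte acceptance reply are illustrative):

        #include <openssl/bio.h>
        #include <openssl/pem.h>
        #include <openssl/x509.h>
        #include <openssl/x509_vfy.h>

        /* Returns 1 if the PEM certificate in buf (len bytes) verifies against ca.pem. */
        int verify_peer_cert(const char *buf, int len)
        {
            int ok = 0;
            BIO *mem = BIO_new_mem_buf((void *)buf, len);
            X509 *cert = PEM_read_bio_X509(mem, NULL, NULL, NULL);
            if (cert != NULL) {
                X509_STORE *store = X509_STORE_new();
                X509_STORE_load_locations(store, "ca.pem", NULL);
                X509_STORE_CTX *ctx = X509_STORE_CTX_new();
                if (X509_STORE_CTX_init(ctx, store, cert, NULL) == 1)
                    ok = (X509_verify_cert(ctx) == 1);
                X509_STORE_CTX_free(ctx);
                X509_STORE_free(store);
                X509_free(cert);
            }
            BIO_free(mem);
            return ok;   /* t2 can then send back a single "accepted"/"rejected" byte to t1 */
        }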

    Read the article

  • How do you save and retrieve a Key/IV pair securely?

    - by Shawn Steward
    I'm using VB.Net's RijndaelManaged (RM) to encrypt files, using the RM.GenerateKey and RM.GenerateIV methods to generate the Key and IV and encrypting the file using the CryptoStream class. I'm planning on saving this Key and IV to a file and want to make sure I'm doing it the right way. I am combining the IV+Key, and encrypting that with my RSA Public key and writing it out to a file. Then, to decrypt I use the RSA Private key on this file to get the IV+Key, split them up and set RM.Key and RM.IV to these values and run the decryptor. Is this the best method to accomplish this, or is there a preferred method for saving the IV & Key? Also, what's the best way to construct and deconstruct the byte array? I used the .Concat method to join them together and that seems to work well but I can't seem to find something as easy to deconstruct it. I played with the .Take method that takes the first x # of bytes and it works for the first part but can't find anything that gets the rest of it.
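
    For the packing and unpacking itself, a VB.NET sketch; it assumes the IV is written first and is RijndaelManaged's default 16 bytes, so adjust the offsets if the layout differs:

        Imports System.Linq
        Imports System.Security.Cryptography

        Module KeyIvPacking
            Sub Main()
                Using rm As New RijndaelManaged()
                    rm.GenerateKey()
                    rm.GenerateIV()

                    ' Pack: IV first, then key; this combined array is what gets RSA-encrypted.
                    Dim combined As Byte() = rm.IV.Concat(rm.Key).ToArray()

                    ' Unpack with LINQ: Skip is the complement of Take.
                    Dim iv As Byte() = combined.Take(rm.IV.Length).ToArray()
                    Dim key As Byte() = combined.Skip(rm.IV.Length).ToArray()

                    ' Or unpack without LINQ:
                    Dim iv2(15) As Byte                        ' 16-byte IV
                    Dim key2(combined.Length - 17) As Byte     ' whatever remains is the key
                    Buffer.BlockCopy(combined, 0, iv2, 0, 16)
                    Buffer.BlockCopy(combined, 16, key2, 0, key2.Length)
                End Using
            End Sub
        End Module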

    Read the article

  • .NET Web Service hydrate custom class

    - by row1
    I am consuming an external C# Web Service method which returns a simple calculation result object like this: [Serializable] public class CalculationResult { public string Name { get; set; } public string Unit { get; set; } public decimal? Value { get; set; } } When I add a Web Reference to this service in my ASP .NET project Visual Studio is kind enough to generate a matching class so I can easily consume and work with it. I am using Castle Windsor and I may want to plug in other method of getting a calculation result object, so I want a common class CalculationResult (or ICalculationResult) in my solution which all my objects can work with, this will always match the object returned from the external Web Service 1:1. Is there anyway I can tell my Web Service client to hydrate a particular class instead of its generated one? I would rather not do it manually: foreach(var fromService in calcuationResultsFromService) { ICalculationResult calculationResult = new CalculationResult() { Name = fromService.Name }; yield return calculationResult; } Edit: I am happy to use a Service Reference type instead of the older Web Reference.

    Read the article

  • Display a ranking grid for game : optimization of left outer join and find a player

    - by Jerome C.
    Hello, I want to build a ranking grid. I have a table of values indexed by a key, and a player has several SimpleValue rows:

        Table SimpleValue: key varchar, value int, playerId int
        Table Player:      id int, nickname varchar

    Now imagine these records:

        SimpleValue:
        key   value   playerId
        for   1       1
        int   2       1
        agi   2       1
        lvl   5       1
        for   6       2
        int   3       2
        agi   1       2
        lvl   4       2

        Player:
        id   nickname
        1    Bob
        2    John

    I want to display a ranking of these players on various SimpleValue keys, something like:

        nickname   for   lvl
        Bob        1     5
        John       6     4

    At the moment I generate an SQL query based on which SimpleValue keys should be displayed and which key the players should be ordered on. For example, to display 'lvl' and 'for' for each player, ordered on 'lvl', the generated query is:

        SELECT p.nickname as nickname, v1.value as lvl, v2.value as for
        FROM Player p
        LEFT OUTER JOIN SimpleValue v1 ON p.id = v1.playerId and v1.key = 'lvl'
        LEFT OUTER JOIN SimpleValue v2 ON p.id = v2.playerId and v2.key = 'for'
        ORDER BY v1.value

    This query runs perfectly, BUT if I want to display 10 different values it generates 10 LEFT OUTER JOINs. Is there a way to simplify this query?

    A second question: is there a way to display only a portion of this ranking? Imagine I have 1000 players and I want to display the TOP 10; I use the LIMIT keyword. Now I want to display the rank of the player Bob, which is 326/1000, and I want to display the 5 ranked players above and below him (so positions 321 to 331). How can I achieve that? Thanks.
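
    On the first question, one sketch that keeps a single join however many keys are displayed is conditional aggregation (MySQL-style backtick quoting is assumed here because key and for are reserved words):

        SELECT p.nickname,
               MAX(CASE WHEN v.`key` = 'lvl' THEN v.value END) AS lvl,
               MAX(CASE WHEN v.`key` = 'for' THEN v.value END) AS `for`
        FROM Player p
        LEFT OUTER JOIN SimpleValue v ON v.playerId = p.id
        GROUP BY p.id, p.nickname
        ORDER BY lvl;

    Adding a tenth displayed key then adds one more CASE expression instead of a tenth LEFT OUTER JOIN.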

    Read the article

  • SQL Server: What locale should be used to format numeric values into SQL Server format?

    - by Ian Boyd
    It seems that SQL Server does not accept numbers formatted using any particular locale. It also doesn't support locales that have digits other than 0-9. For example, if the current locale is Bengali, then the number 123456789 would come out as "১২৩৪৫৬৭৮৯". And that's just the digits, never mind what the digit grouping would be. But the same problem happens for numbers in the invariant locale, which formats numbers as "123,456,789", which SQL Server won't accept.

    Is there a culture that matches what SQL Server accepts for numeric values? Or will I have to create some custom "SQL Server" culture, generating the rules for that culture myself from lower-level formatting routines?

    If I was in .NET (which I'm not), I could peruse the standard numeric format strings. The format codes available in .NET are:

        c (Currency):     $123.46
        d (Decimal):      1234
        e (Exponential):  1.052033E+003
        f (Fixed Point):  1234.57
        g (General):      123.456
        n (Number):       1,234.57
        p (Percent):      100.00 %
        r (Round Trip):   123456789.12345678
        x (Hexadecimal):  FF

    Only 6 of these accept all numeric types (c, e, f, g, n, p), and of those only 2 (f and g) generate string representations, in the en-US locale anyway, that would be accepted by SQL Server. Of the remaining two, Fixed Point is dependent on the locale's digits rather than on the number being used, leaving the General g format; and I can't even say for certain that the g format won't add digit groupings (e.g. 1,234).

    Is there a locale that formats numbers in the way SQL Server expects? Is there a .NET format code? A Java format code? A Delphi format code? A VB format code? A stdio format code? latin-numeral-digits

    Read the article

  • problem loading texture with transparency with OpenGL ES and Android

    - by Evan Kimia
    Im trying to load an image that has background transparency that will be layered over another texture. When i try and load it, all i get is a white screen. The texture is 512 by 512, and its saved in photoshop as a 24 bit PNG (standard PNG specs in the Photoshop Save for Web and Devices config window). Any idea why its not showing? The texture without transparency shows without a problem. Here is my loadTextures method: public void loadGLTexture(GL10 gl, Context context) { //Get the texture from the Android resource directory Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.m1); Bitmap normalScheduleLines = BitmapFactory.decodeResource(context.getResources(), R.drawable.m1n); //Generate texture pointers... gl.glGenTextures(3, textures, 0); //...and bind it to our array gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[1]); //Create Nearest Filtered Texture gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR_MIPMAP_NEAREST); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); gl.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_GENERATE_MIPMAP, GL11.GL_TRUE); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE); GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); bitmap.recycle(); //Bind our normal schedule bus map lines gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]); //Create Nearest Filtered Texture gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR_MIPMAP_NEAREST); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); gl.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_GENERATE_MIPMAP, GL11.GL_TRUE); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE); GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGBA, normalScheduleLines, 0); normalScheduleLines.recycle(); }
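
    If the upload itself is succeeding, an all-white result frequently just means alpha blending was never switched on for the pass that draws the transparent layer. A hedged sketch of the state that would need to be set (OpenGL ES 1.x calls matching the code above; the decode options are only needed if the PNG is being flattened to an opaque config on load):

        // Before drawing the transparent overlay (e.g. in onSurfaceCreated):
        gl.glEnable(GL10.GL_BLEND);
        gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
        gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_MODULATE);

        // Make sure the overlay bitmap keeps its alpha channel when decoded:
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inPreferredConfig = Bitmap.Config.ARGB_8888;
        Bitmap normalScheduleLines = BitmapFactory.decodeResource(
                context.getResources(), R.drawable.m1n, opts);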

    Read the article

  • Rails - CSV export: prompt for file download

    - by Pierre
    Hello, I want to give my users the ability to export a table to CSV. So in my controller, I've added on top of the file: respond_to :html, :js, :csv I'm also setting the headers if the requested format is csv: if params[:format] == 'csv' generate_csv_headers("negotiations-#{Time.now.strftime("%Y%m%d")}") end Code for generate_csv_headers(in application_controller) is: def generate_csv_headers(filename) headers.merge!({ 'Cache-Control' => 'must-revalidate, post-check=0, pre-check=0', 'Content-Type' => 'text/csv', 'Content-Disposition' => "attachment; filename=\"#{filename}\"", 'Content-Transfer-Encoding' => 'binary' }) end I've also created a view named index.csv.erb to generate my file: <%- headers = ["Id", "Name"] -%> <%= CSV.generate_line headers %> <%- @negotiations.each do |n| -%> <%- row = [ n.id, n.name ] -%> <%= CSV.generate_line row %> <%- end -%> I don't have any error, but it simply displays the content of the CSV file, while I'd expect a prompt from the browser to download the file. I've read a lot, but could not find anything that'd work. Do you have an idea? thanks, p.
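
    A sketch of the send_data route instead, which sets the attachment headers itself (Rails 2.x style, assuming the index.csv.erb template described above; the template path is illustrative):

        respond_to do |format|
          format.html
          format.js
          format.csv do
            csv = render_to_string(:template => 'negotiations/index.csv.erb', :layout => false)
            send_data csv,
                      :type => 'text/csv; charset=utf-8',
                      :filename => "negotiations-#{Time.now.strftime('%Y%m%d')}.csv"
          end
        end

    With send_data the Content-Disposition defaults to attachment, so the browser prompts for a download instead of rendering the CSV inline.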

    Read the article

  • symfony doctrine build-sql error

    - by user313571
    I have some big problems with symfony and Doctrine at the beginning of a new project. I created the database diagram with MySQL Workbench, loaded the SQL into phpMyAdmin, and then ran symfony doctrine:build-schema to generate the YAML schema. It generates a wrong schema (relations don't have ON DELETE / ON UPDATE). After that I tried symfony doctrine:build --sql and symfony doctrine:insert-sql. The insert-sql step raises an error (can't create table ... failing query: alter table ... add constraint ...), so I took a look at the generated SQL and found differences between the SQL generated by MySQL Workbench (which works perfectly, including relations) and the SQL generated by Doctrine.

    To keep it short: I have two tables, EVENT and FORM, with a 1-to-n relation (each event may have multiple forms), so the correct constraint (generated by Workbench) is:

        ALTER TABLE `form` ADD CONSTRAINT `fk_form_event1`
          FOREIGN KEY (`event_id`) REFERENCES `event` (`id`)
          ON DELETE CASCADE ON UPDATE CASCADE;

    Doctrine's generated statement is:

        ALTER TABLE event ADD CONSTRAINT event_id_form_event_id
          FOREIGN KEY (id) REFERENCES form(event_id);

    It's completely reversed, and I'm sure this is where the error comes from. What should I do? Is the Doctrine version even correct?
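
    When the ON DELETE / ON UPDATE behaviour has to survive doctrine:build, it can also be declared directly in the YAML schema rather than recovered from the database. A Doctrine 1.x sketch, assuming Event and Form models (column names taken from the Workbench constraint above):

        # config/doctrine/schema.yml (fragment)
        Form:
          columns:
            event_id: { type: integer, notnull: true }
            name:     { type: string(255) }
          relations:
            Event:
              local: event_id
              foreign: id
              foreignAlias: Forms
              onDelete: CASCADE
              onUpdate: CASCADE

    Building the SQL from this schema should emit the foreign key on form.event_id referencing event.id, in the same direction as the Workbench statement.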

    Read the article
