Search Results

Search found 19480 results on 780 pages for 'do your own homework'.


  • Set fields with introspection - Problem with String.valueOf(String)

    - by fabb
    Hey there! I'm setting public fields of the object 'this' via reflection. Both the field name and the value are given as Strings. I use several different field types: Boolean, Integer, Float, Double, my own enum, and String. It works with all of them except String. The exception that gets thrown says that no method with the signature String.valueOf(String) exists... Now I use a dirty instanceof workaround to detect whether each field is a String and, in that case, just copy the value to the field.

        private void setField(String field, String value) throws Exception {
            Field wField = this.getClass().getField(field);
            if (wField.get(this) instanceof String) {
                // TODO dirty hack
                // stupid workaround as java.lang.String.valueOf(java.lang.String) fails...
                wField.set(this, value);
            } else {
                Method parseMethod = wField.getType().getMethod("valueOf", new Class[]{String.class});
                wField.set(this, parseMethod.invoke(wField, value));
            }
        }

    Any ideas how to avoid that workaround? Do you think java.lang.String should support the method valueOf(String)? Thanks, fabb
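
    A minimal sketch of one possible alternative (my illustration, not fabb's code, and it assumes the fields are public): branch on the declared field type instead of the current value, which also works when the field is still null, and only fall back to the static valueOf(String) lookup for the non-String types.

        import java.lang.reflect.Field;
        import java.lang.reflect.Method;

        public class FieldSetter {
            // Sets a public field of 'target' from the String representation of its value.
            public static void setField(Object target, String field, String value) throws Exception {
                Field f = target.getClass().getField(field);
                Class<?> type = f.getType();
                if (type == String.class) {
                    // No conversion needed; String has valueOf(Object) but no valueOf(String),
                    // which is why the reflective lookup of that exact signature fails.
                    f.set(target, value);
                } else {
                    // Boolean, Integer, Float, Double and every enum expose a static valueOf(String).
                    Method parse = type.getMethod("valueOf", String.class);
                    f.set(target, parse.invoke(null, value));
                }
            }
        }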

    Read the article

  • How do I add another attribute to an MKAnnotation?

    - by kevin Mendoza
    For my app, each annotation on a map corresponds to a mine locality, and each mine has its own unique 7-digit integer identifier. I'm trying to add the property minesEntryNumber to the annotation so that when the annotation is clicked later, I can bring up specific information about the selected annotation. This is part of my code:

        for (id mine in mines) {
            //NSLog(@"in the loop");
            workingCoordinate.latitude = [[mine latitudeInitial] doubleValue];
            workingCoordinate.longitude = [[mine longitudeInitial] doubleValue];
            iProspectAnnotation *tempMine = [[iProspectAnnotation alloc] initWithCoordinate:workingCoordinate];
            [tempMine setTitle:[mine mineName]];
            tempMine.minesEntryNumber = [mine entryNumber];
            // other code for dealing with mine types and adding the annotation to the map view
        }

    The code works fine without the "tempMine.minesEntryNumber = [mine entryNumber];" line: it loads the map and shows the annotations. However, when I add that line it produces an error. So how do I add this property to each annotation, and how do I access it later in a different .m file?

    Read the article

  • Python - Open default mail client using mailto, with multiple recipients

    - by victorhooi
    Hi, I'm attempting to write a Python function to send an email to a list of users, using the default installed mail client. I want to open the email client and give the user the opportunity to edit the list of recipients or the email body. I did some searching, and according to http://www.sightspecific.com/~mosh/WWW_FAQ/multrec.html it's apparently against the RFC spec to put multiple comma-delimited recipients in a mailto link. However, that's the way everybody else seems to be doing it. What exactly is the modern stance on this? Anyhow, I found the following two sites:

    http://2ality.blogspot.com/2009/02/generate-emails-with-mailto-urls-and.html
    http://www.megasolutions.net/python/invoke-users-standard-mail-client-64348.aspx

    which seem to suggest solutions using urllib.parse (urllib.parse.quote for me) and webbrowser.open. I tried the sample code from the first link (2ality.blogspot.com), and that worked fine and opened my default mail client. However, when I try to use the code in my own module, it seems to open up my default browser for some weird reason. No funny text in the address bar, it just opens up the browser. The email_incorrect_phone_numbers() function is in the Employees class, which contains a dictionary (employee_dict) of Employee objects, which themselves have a number of employee attributes (sn, givenName, mail etc.). Full code is here: http://stackoverflow.com/questions/2963975/python-converting-csv-to-objects-code-design

        from urllib.parse import quote
        import webbrowser
        ....
        def email_incorrect_phone_numbers(self):
            email_list = []
            for employee in self.employee_dict.values():
                if not PhoneNumberFormats.standard_format.search(employee.telephoneNumber):
                    print(employee.telephoneNumber, employee.sn, employee.givenName, employee.mail)
                    email_list.append(employee.mail)
            recipients = ', '.join(email_list)
            webbrowser.open("mailto:%s?subject=%s&body=%s" % (recipients, quote("testing"), quote('testing')))

    Any suggestions? Cheers, Victor
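
    For comparison only, here is the same mechanism sketched in Java rather than Python (my illustration, with made-up addresses): java.awt.Desktop hands a percent-encoded mailto URI with comma-separated recipients straight to the default mail client, which avoids the detour through the browser.

        import java.awt.Desktop;
        import java.net.URI;
        import java.net.URLEncoder;

        public class MailtoDemo {
            public static void main(String[] args) throws Exception {
                // Hypothetical recipients; mailto uses commas between addresses.
                String recipients = "alice@example.com,bob@example.com";
                // Percent-encode subject and body; mailto treats '+' literally, so use %20 for spaces.
                String subject = URLEncoder.encode("testing", "UTF-8").replace("+", "%20");
                String body = URLEncoder.encode("testing", "UTF-8").replace("+", "%20");
                URI mailto = URI.create("mailto:" + recipients + "?subject=" + subject + "&body=" + body);
                // Opens the system's default mail client rather than the web browser.
                Desktop.getDesktop().mail(mailto);
            }
        }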

    Read the article

  • Running a GWT application (including Applets) inside an IFRAME from an ASP.NET 3.5 app?

    - by Jay Stevens
    We are looking at integrating a full-blown GWT (Google Web Toolkit 2.0) application with an existing ASP.NET 3.5 application. My first gut reaction is that this is a horrible frankenstein idea. However, the customer has insisted that we use this application developed by a third-party. I have almost NO CONTROL over the development of the GWT app. My first thought is to actually attempt to embed this in an iFrame. Because GWT is running under Tomcat/Jakarta, it is hosted on a different server from the .NET app so the iFrame src will be to a URL on the other machine. I need to utilize our own ASP.NET authorization scheme to restrict access to the embedded GWT application. The GWT app also uses embedded java applets, which don't seem to be working right now inside the iframe. The GWT app makes calls to a backend server (using GWT-RPC?). Any major problems with this approach that anyone can see? Will GWT work on an iframe while hosted on a different machine? NOTE: SIMPLY ADDING A DIV WITH THE SAME NAME DOES NOT WORK FOR THIS!

    Read the article

  • How can I create objects based on dump file memory in a WinDbg extension?

    - by pj4533
    I work on a large application, and I frequently use WinDbg to diagnose issues based on a DMP file from a customer. I have written a few small extensions for WinDbg that have proved very useful for pulling bits of information out of DMP files. In my extension code I find myself dereferencing C++ class objects in the same way, over and over, by hand. For example:

        Address = GetExpression("somemodule!somesymbol");
        ReadMemory(Address, &addressOfPtr, sizeof(addressOfPtr), &cb);
        // get the actual address
        ReadMemory(addressOfObj, &addressOfObj, sizeof(addressOfObj), &cb);
        ULONG offset;
        ULONG addressOfField;
        GetFieldOffset("somemodule!somesymbolclass", "somefield", &offset);
        ReadMemory(addressOfObj+offset, &addressOfField, sizeof(addressOfField), &cb);

    That works well, but as I have written more extensions, with greater functionality (and accessing more complicated objects in our application's DMP files), I have longed for a better solution. I have access to the source of our own application, of course, so I figure there should be a way to copy an object out of a DMP file and use that memory to create an actual object in the debugger extension that I can call functions on (by linking in DLLs from our application). This would save me the trouble of pulling things out of the DMP by hand. Is this even possible? I tried obvious things like creating a new object in the extension, then overwriting it with a big ReadMemory directly from the DMP file. This seemed to put the data in the right fields, but it freaked out when I tried to call a function. I figure I am missing something... maybe C++ pulls some vtable funkiness that I don't know about? My code looks similar to this:

        SomeClass* thisClass = SomeClass::New();
        ReadMemory(addressOfObj, &(*thisClass), sizeof(*thisClass), &cb);

    Read the article

  • Calling functions from the Immediate Window during debugging -- advanced Visual Studio kung-fu test

    - by kizzx2
    I see some related questions have been asked, but they're either too advanced for me to grasp or they lack a step-by-step guide from start to finish (most of them end up being insider talk about their own experiment results). OK, here it is. Given this simple program:

        #include <stdio.h>
        #include <string.h>

        int main()
        {
            FILE * f;
            char buffer[100];
            memset(buffer, 0, 100);
            fun();
            f = fopen("main.cpp", "r");
            fread(buffer, 1, 99, f);
            printf(buffer);
            fclose(f);
            return 0;
        }

    What it does is basically print itself (assume the file name is main.cpp). Question: how can I have it print another file, say foobar.txt, without modifying the source code? It has something to do with running it under Visual Studio, stepping through the functions and hijacking the FILE pointer right before fread() is called. No need to worry about leaking resources by calling fclose(). I tried the simple f = fopen("foobar.txt", "r"), which gave:

        CXX0017: Error: symbol "fopen" not found

    Any ideas?

    Read the article

  • How to generate random numbers from a lognormal distribution within a specific range in Matlab

    - by Harpreet
    My grain sizes are defined as D=[1.19,1.00,0.84,0.71,0.59,0.50,0.42]. The problem is described below in steps.

    1. Grain sizes should follow a lognormal distribution.
    2. The mean of the grain sizes is fixed at 0.84, and the standard deviation should be as low as possible, but not zero.
    3. 90% of the grains (by weight %) fall in the size range of 1.19 to 0.59, and the remaining 10% fall in the size range of 0.50 to 0.42.

    Now I want to find the probabilities (weight percentages) of the grains falling in each grain size. It is allowable to split this grain size distribution into further small sizes, but it must always be in the range of 1.19 and 0.42, i.e. 'D' can be continuous but 0.42 < D < 1.19. I need it fast. I tried on my own but I am not able to get the correct result: I am getting negative probabilities (weight percentages). Thanks to anyone who helps. I didn't incorporate point 3, as I came to know about that condition later. Here are the simple steps I tried:

        %%
        D=[1.19,1.00,0.84,0.71,0.59,0.50,0.42];
        s=0.30;                         % std dev of the lognormal distribution
        m=0.84;                         % mean of the lognormal distribution
        mu=log(m^2/sqrt(s^2+m^2));      % mean of the associated normal dist.
        sigma=sqrt(log((s^2/m^2)+1));   % std dev of the associated normal dist.
        [r,c]=size(D);
        for i=1:c
            D(i)=mu+(sigma.*randn(1));
            w(i)=(log(D(i))-mu)/sigma;  % the probability or the wt. percentage of the grain sizes
        end
        grain_size=exp(D);
        %%
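
    A worked relation that may help here (standard lognormal identities, offered as a suggestion rather than a fix to the posted code): the weight fraction in a size bin [a, b] is a difference of lognormal CDF values, which can never be negative, whereas the loop above draws random samples and computes z-scores instead of evaluating probabilities.

        \mu = \ln\frac{m^2}{\sqrt{s^2 + m^2}}, \qquad
        \sigma = \sqrt{\ln\left(1 + \frac{s^2}{m^2}\right)}

        P(a \le D \le b) = \Phi\left(\frac{\ln b - \mu}{\sigma}\right) - \Phi\left(\frac{\ln a - \mu}{\sigma}\right)

    Here Phi is the standard normal CDF (normcdf in Matlab's Statistics Toolbox, or equivalently logncdf(b,mu,sigma) - logncdf(a,mu,sigma)); dividing each bin probability by the total probability of the 0.42-1.19 window renormalizes the weights to that range.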

    Read the article

  • Using a Hidden Markov Model to design an AI mp3 player

    - by Casper Slynge
    Hey guys, I'm working on an assignment where I want to design an AI for an mp3 player. The AI must be trained and designed with the use of an HMM method. The mp3 player shall have the functionality of adapting to its user by analyzing incoming biological sensor data, and from this data the mp3 player will choose a genre for the next song. Given in the assignment are 14 samples of data. One sample consists of Heart Rate, Respiration, Skin Conductivity, Activity and finally the output genre. Below are the 14 samples of data, just for you to get an impression of what I'm talking about.

        Sample  HR      RSP     SC      Activity  Genre
        S1      Medium  Low     High    Low       Rock
        S2      High    Low     Medium  High      Rock
        S3      High    High    Medium  Low       Classic
        S4      High    Medium  Low     Medium    Classic
        S5      Medium  Medium  Low     Low       Classic
        S6      Medium  Low     High    High      Rock
        S7      Medium  High    Medium  Low       Classic
        S8      High    Medium  High    Low       Rock
        S9      High    High    Low     Low       Classic
        S10     Medium  Medium  Medium  Low       Classic
        S11     Medium  Medium  High    High      Rock
        S12     Low     Medium  Medium  High      Classic
        S13     Medium  High    Low     Low       Classic
        S14     High    Low     Medium  High      Rock

    My experience with HMMs is quite limited, so my question to you is whether I have the right angle on the assignment. I have three different states for each sensor: Low, Medium, High. Two observations/output symbols: Rock, Classic. In my own opinion, I see my start probabilities as the weighted factors for either a Low, Medium or High state in the Heart Rate. So the ideal solution is that the AI will learn these 14 sets of samples. And when a user's sensor input is received, the AI will compare the combination of states for all four sensors with the already memorized samples. If there is a matching combination, the AI will choose the genre, and if not it will choose a genre according to the weighted transition probabilities, while simultaneously updating the transition probabilities with the new data. Is this a right approach to take, or am I missing something? Is there another way to determine the output probabilities (I read about maximum likelihood estimation by EM, but don't understand the concept)? Best regards, Casper
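
    For the maximum-likelihood part, a worked set of formulas may help (standard complete-data HMM estimates, stated as background rather than as the assignment's required method): when every sample is fully labeled, EM is not needed and the parameters reduce to relative frequencies of counts.

        \hat{\pi}_i = \frac{\text{count(sequences starting in state } i)}{N}, \qquad
        \hat{a}_{ij} = \frac{\text{count(transitions } i \to j)}{\text{count(transitions out of } i)}, \qquad
        \hat{b}_j(k) = \frac{\text{count(state } j \text{ emitting symbol } k)}{\text{count(visits to state } j)}

    EM (Baum-Welch) only becomes necessary when the hidden states are not observed, in which case the counts above are replaced by their expected values under the current model.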

    Read the article

  • BN_hex2bn magically segfaults in OpenSSL

    - by xunil154
    Greetings, this is my first post on Stack Overflow, and I'm sorry if it's a bit long. I'm trying to build a handshake protocol for my own project and am having issues with the server converting the client's RSA public key to a BIGNUM. It works in my client code, but the server segfaults when attempting to convert the hex value of the client's public RSA to a BIGNUM. I have already checked that there is no garbage before or after the RSA data, and have looked online, but I'm stuck.

    Header segment:

        typedef struct KEYS {
            RSA  *serv;
            char *serv_pub;
            int   pub_size;
            RSA  *clnt;
        } KEYS;
        KEYS keys;

    Initializing function:

        // Generates and validates the server's key
        /* code for generating server RSA left out, it's working */
        // Set client exponent
        keys.clnt = 0;
        keys.clnt = RSA_new();
        BN_dec2bn(&keys.clnt->e, RSA_E_S);  // RSA_E_S contains the public exponent

    Problem code (in Network::server_handshake):

        // *Received an encrypted message from the network and decrypted it into 'buffer' (1024 bytes long)*
        cout << "Assigning clients RSA" << endl;
        // I have verified that 'buffer' contains the proper key
        if (BN_hex2bn(&keys.clnt->n, buffer) < 0) {
            Error("ERROR reading server RSA");
        }
        cout << "clients RSA has been assigned" << endl;

    The program segfaults at BN_hex2bn(&keys.clnt->n, buffer) with the following error (valgrind output):

        Invalid read of size 8
           at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8)
           by 0x40F23E: Network::server_handshake() (Network.cpp:177)
           by 0x40EF42: Network::startNet() (Network.cpp:126)
           by 0x403C38: main (server.cpp:51)
        Address 0x20 is not stack'd, malloc'd or (recently) free'd
        Process terminating with default action of signal 11 (SIGSEGV)
        Access not within mapped region at address 0x20
           at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8)

    And I don't know why: I'm using the exact same code in the client program, and it works just fine. Any input is greatly appreciated!

    Read the article

  • Is it possible to write map/reduce jobs for Amazon Elastic MapReduce using .NET?

    - by Chris
    Is it possible to write map/reduce jobs for Amazon Elastic MapReduce (http://aws.amazon.com/elasticmapreduce/) using .NET languages? In particular I would like to use C#. Preliminary research suggests not. The above URL's marketing text suggests you have a "choice of Java, Ruby, Perl, Python, PHP, R, or C++", without mentioning .NET languages. This Amazon thread (http://developer.amazonwebservices.com/connect/thread.jspa?messageID=136051 -- "Support for C# / F# map/reducers") explicitly says that "currently Amazon Elastic MapReduce does not support Mono platform or languages such as C# or F#." The above suggests that it can't be done. I'm wondering if there are any workarounds, though. For example, can I modify the Elastic MapReduce machine image for my account, and install Mono on there? An alternative, suggested by Amazon FAQs "Using Other Software Required by Your Jar" (http://docs.amazonwebservices.com/ElasticMapReduce/latest/DeveloperGuide/index.html?CHAP_AdvancedTopics.html) and "How to Use Additional Files and Libraries With the Mapper or Reducer" (http://docs.amazonwebservices.com/ElasticMapReduce/latest/DeveloperGuide/index.html?addl_files.html), is to make the first step of the Map/Reduce job be to install Mono on the local instance. That sounds kind of inefficient, but maybe it could work? Maybe a saner alternative would be to try to forgo the convenience of Elastic MapReduce, and manually set up my own Hadoop cluster on EC2. Then I assume I could install Mono without difficulty.

    Read the article

  • Where should you put 3rd party .NET dlls when using git submodules to avoid duplication

    - by Tim Abell
    I have two .NET library projects in Visual Studio 2008 that both make use of the MySql Connector for .NET (MySql.Data.dll). These libraries are then in turn both used by a .NET command line application which also uses the Connector. The library projects are pulled in to the application's solution as git submodules and referenced by project in Visual Studio. I'm looking for the most effective strategy for storing and referencing the MySql Connector library. I have tried having the MySql.Data.dll checked in to all three projects (in their root folder), this was problematic when one project changed to a newer version of the connector dll. Although each project had its own version of the dll, only one was packaged into the resultant application leading to an API mismatch which was hard to pin down. This has put me off this approach. I have tried having the command line application reference the connector dll that is held in a submodule, however this only removes the possibility of version mismatches when there is only one submodule rather than two as in this case. I am contemplating putting the dll in the global assembly cache (GAC) of all machines that need to build or use the application, but I'm wary of not having all dependencies for an application available in source control.

    Read the article

  • Java generic Comparable where subclasses can't compare to each other

    - by dege
        public abstract class MyAbs implements Comparable<MyAbs>

    This would work, but then I would be able to compare class A and class B with each other if they both extend MyAbs. What I want to accomplish, however, is the exact opposite. So does anyone know a way to make the generic type be the subclass itself? Seemed like such a simple thing at first... Edit: to explain it a little further with an example: say you have an abstract class Animal, and you extend it with Dog and Ant. I wouldn't want to compare ants with dogs, but I would want to compare one dog with another. The dog might have a variable saying what color it is, and that is what I want to use in the compareTo method. When it comes to ants, however, I would rather compare the ants' size than their color. Hope that clears it up. It could possibly be a design flaw, however.
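
    A minimal sketch of the usual self-bounded generic pattern, using the Animal/Dog/Ant example from the question (the fields are my own illustration): each concrete subclass fixes the type parameter to itself, so it only implements Comparable against its own type.

        // Dog becomes Comparable<Dog> and Ant becomes Comparable<Ant>;
        // comparing a Dog to an Ant no longer compiles.
        abstract class Animal<T extends Animal<T>> implements Comparable<T> {
        }

        class Dog extends Animal<Dog> {
            int color;  // hypothetical attribute
            public int compareTo(Dog other) {
                return Integer.compare(this.color, other.color);
            }
        }

        class Ant extends Animal<Ant> {
            int size;   // hypothetical attribute
            public int compareTo(Ant other) {
                return Integer.compare(this.size, other.size);
            }
        }

    The bound cannot force T to be the declaring class itself (class Cat extends Animal<Dog> would still compile), but it does remove the accidental cross-type comparisons the question is about.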

    Read the article

  • Extend legacy site with another server-side programming platform - best practice

    - by Andrew Florko
    The company I work for has a site that was developed 6-8 years ago by a team that was enthusiastic enough to use their own private PHP-based CMS. I have to put dynamic data from one intranet company database onto this site within one week: 2-3 pages. I contacted the company's site administrator and she showed me the administrative part. The CMS only allows inserting HTML blocks and managing the site map (the site is deployed on a machine that is inside the company and fully accessible and upgradable). I'm not a PHP guy and I don't want to dive into a legacy CMS engine hardly anyone has ever heard of. I also don't want to contact the developer team, because I'm not sure they are still around and capable enough to extend this old site, and it would take too much time anyway. I am about to deploy a helper ASP.NET site on IIS with the 2-3 required pages and refer to the helper site via an iframe from the present site. The new pages will also allow downloading some dynamic content from the present site. Is this OK, and what are the pitfalls of the iframe approach?

    Read the article

  • Windows Workflow Foundation: Recommendations how to design architecture

    - by Petr Felzmann
    We are running several instances of the same ASP.NET application (one per customer) based on our custom framework (libraries). Each application uses its own database (Initial Catalog in terms of the connection string). Now we would like to add workflow capability (of course 4.0 ;) to the applications. The particular workflows will be the same for all the applications; only some initial settings of each workflow can vary, e.g. in one application the e-mail will be sent to user X, but in another application to user Y. I have several general questions about how to design the architecture:

    (1) Can the workflow database be shared by all the applications?
    (2) Where should the workflow engine be hosted - inside our custom Windows NT service or inside IIS? What are the criteria for choosing the right host?
    (3) How should the workflow engine communicate with the applications? Should the application call some WCF endpoint API configured in the workflow host, or vice versa - should each application provide a WCF endpoint API that the workflow engine will call? How will the workflow engine then identify applications? Both cases probably require some application identifier as a parameter in API calls.
    (4) We would also like to store some information in the application databases based on the workflow states. Is that possible?

    Thanks for suggestions!

    Read the article

  • Automated Oracle Schema Migration Tool

    - by Dave Jarvis
    What are some tools (commercial or OSS) that provide a GUI-based mechanism for creating schema upgrade scripts? To be clear, here are the tool responsibilities:

    - Obtain connection to recent schema version (called "source").
    - Obtain connection to previous schema version (called "target").
    - Compare all schema objects between source and target.
    - Create a script to make the target schema equivalent to the source schema ("upgrade script").
    - Create a rollback script to revert the source schema, used if the upgrade script fails (at any point).
    - Create individual files for schema objects.

    The software must:

    - Use ALTER TABLE instead of DROP and CREATE for renamed columns.
    - Work with Oracle 10g or greater.
    - Create scripts that can be batch executed (via command line).
    - Trivial installation process.
    - (Bonus) Create scripts that can be executed with SQL*Plus.

    Here are some examples (from StackOverflow, ServerFault, and Google searches): Change Manager, Oracle SQL Developer.

    Software that does not meet the criteria, or cannot be evaluated, includes:

    - TOAD PL/SQL Developer - Invalid SQL*Plus statements. Does not produce ALTER statements.
    - SQL Fairy - No installer. Complex installation process. Poorly documented.
    - DBDiff - Crippled data set evaluation, poor customer support.
    - OrbitDB - Crippled data set evaluation.
    - SchemaCrawler - No easily identifiable download version for Oracle databases.
    - SQL Compare - SQL Server, not Oracle.
    - LiquiBase - Requires changing the development process. No installer. Manually edit config files. Does not recognize its own baseUrl parameter.

    The only acceptable crippling of the evaluation version is by time. Crippling by restricting the number of tables and views hides possible bugs that are only visible in the software during the attempt to migrate hundreds of tables and views.

    Read the article

  • Problem saving and retrieving an image in a SQL database from C#

    - by Mobin
    I used this code for inserting records into a Person table in my DB:

        System.IO.MemoryStream ms = new System.IO.MemoryStream();
        Image img = Image_Box.Image;
        img.Save(ms, System.Drawing.Imaging.ImageFormat.Bmp);
        this.personTableAdapter1.Insert(NIC_Box.Text.Trim(), Application_Box.Text.Trim(),
            Name_Box.Text.Trim(), Father_Name_Box.Text.Trim(), DOB_TimePicker.Value.Date,
            Address_Box.Text.Trim(), City_Box.Text.Trim(), Country_Box.Text.Trim(), ms.GetBuffer());

    When I retrieve the image with the following code, it works perfectly fine:

        byte[] image = (byte[])Person_On_Application.Rows[0][8];
        MemoryStream Stream = new MemoryStream();
        Stream.Write(image, 0, image.Length);
        Bitmap Display_Picture = new Bitmap(Stream);
        Image_Box.Image = Display_Picture;

    But if I update the record with my own generated query, like this:

        System.IO.MemoryStream ms = new System.IO.MemoryStream();
        Image img = Image_Box.Image;
        img.Save(ms, System.Drawing.Imaging.ImageFormat.Bmp);
        Query = "UPDATE Person SET Person_Image ='" + ms.GetBuffer() + "' WHERE (Person_NIC = '" + NIC_Box.Text.Trim() + "')";

    then the next time I use the same code as above to retrieve and display the image, the program throws an exception.
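
    The question's code is C#, but the underlying issue is language-agnostic: concatenating a byte array into the SQL string stores the text "System.Byte[]" rather than the image bytes. A minimal sketch of the parameterized alternative, written in Java/JDBC purely to illustrate the idea (table and column names taken from the question, connection URL hypothetical):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class UpdatePersonImage {
            // Binds the image bytes as a parameter so the driver handles the binary encoding.
            public static void updateImage(String jdbcUrl, String nic, byte[] imageBytes) throws Exception {
                try (Connection con = DriverManager.getConnection(jdbcUrl);
                     PreparedStatement ps = con.prepareStatement(
                             "UPDATE Person SET Person_Image = ? WHERE Person_NIC = ?")) {
                    ps.setBytes(1, imageBytes);
                    ps.setString(2, nic);
                    ps.executeUpdate();
                }
            }
        }

    The same pattern exists in ADO.NET as SqlCommand parameters; the point is that the binary value never passes through string concatenation.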

    Read the article

  • How to parse a custom XML-style error code response from a website

    - by user1870127
    I'm developing a program that queries and prints out open data from the local transit authority, which is returned in the form of an XML response. Normally, when there are buses scheduled to run in the next few hours (and in other typical situations), the XML response generated by the page is handled correctly by the java.net.URLConnection.getInputStream() function, and I am able to print the individual results afterwards. The problem is when the buses are NOT running, or when some other problem with my queries develops after a query is sent to the transit authority's web server. When the authority developed their service, they came up with their own unique error response codes, which are also sent as XML. For example, one of these error messages might look like this:

        <Error xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
            <Code>3005</Code>
            <Message>Sorry, no stop estimates found for given values.</Message>
        </Error>

    (This code and similar is all that I receive from the transit authority in such situations.) However, it appears that URLConnection.getInputStream() and some of its siblings are unable to interpret this custom code as a "valid" response that I can handle and print out as an error message. Instead, they give me a more generic HTTP/1.1 404 Not Found error. This problem cascades into my program, which then prints out a java.io.FileNotFoundException error pointing to the offending input stream. My question is therefore two-fold:

    1. Is there a way to retrieve, parse, and print a custom XML-formatted error code sent by a web service using the plugins that are available in Java?
    2. If the above is not possible, what other tools should I use or develop to handle such custom codes as described?
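
    A minimal sketch of one way to get at that error body with just the JDK (the endpoint URL is a placeholder of mine): when the server answers with a 4xx status, HttpURLConnection.getInputStream() throws, but the XML the transit authority sent back is still available from getErrorStream() and can be parsed with the standard DOM parser.

        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;

        public class TransitErrorDemo {
            public static void main(String[] args) throws Exception {
                URL url = new URL("http://example.org/transit/stopestimates"); // hypothetical endpoint
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                int status = conn.getResponseCode();
                // On 4xx/5xx responses the body is on the error stream, not the input stream.
                InputStream body = (status >= 400) ? conn.getErrorStream() : conn.getInputStream();
                Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(body);
                String root = doc.getDocumentElement().getNodeName();
                if ("Error".equals(root)) {
                    String code = doc.getElementsByTagName("Code").item(0).getTextContent();
                    String message = doc.getElementsByTagName("Message").item(0).getTextContent();
                    System.out.println("Service error " + code + ": " + message);
                } else {
                    System.out.println("Normal response; process the stop estimates as usual.");
                }
            }
        }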

    Read the article

  • Can parser combinators be made efficient?

    - by Jon Harrop
    Around 6 years ago, I benchmarked my own parser combinators in OCaml and found that they were ~5× slower than the parser generators on offer at the time. I recently revisited this subject and benchmarked Haskell's Parsec vs a simple hand-rolled precedence climbing parser written in F#, and was surprised to find the F# to be 25× faster than the Haskell. Here's the Haskell code I used to read a large mathematical expression from file, parse and evaluate it:

        import Control.Applicative
        import Text.Parsec hiding ((<|>))

        expr = chainl1 term ((+) <$ char '+' <|> (-) <$ char '-')

        term = chainl1 fact ((*) <$ char '*' <|> div <$ char '/')

        fact = read <$> many1 digit <|> char '(' *> expr <* char ')'

        eval :: String -> Int
        eval = either (error . show) id . parse expr "" . filter (/= ' ')

        main :: IO ()
        main = do
            file <- readFile "expr"
            putStr $ show $ eval file
            putStr "\n"

    and here's my self-contained precedence climbing parser in F#:

        let rec (|Expr|) (P(f, xs)) = Expr(loop (' ', f, xs))
        and loop = function
            | ' ' as oop, f, ('+' | '-' as op)::P(g, xs)
            | (' ' | '+' | '-' as oop), f, ('*' | '/' as op)::P(g, xs) ->
                let h, xs = loop (op, g, xs)
                let op =
                    match op with
                    | '+' -> (+)
                    | '-' -> (-)
                    | '*' -> (*)
                    | '/' -> (/)
                loop (oop, op f h, xs)
            | _, f, xs -> f, xs
        and (|P|) = function
            | '('::Expr(f, ')'::xs) -> P(f, xs)
            | c::xs when '0' <= c && c <= '9' -> P(int(string c), xs)

    My impression is that even state-of-the-art parser combinators waste a lot of time backtracking. Is that correct? If so, is it possible to write parser combinators that generate state machines to obtain competitive performance, or is it necessary to use code generation?

    Read the article

  • Best practice for writing ARRAYS

    - by Douglas
    I've got an array with about 250 entries in it, each of which is its own array of values. Each entry is a point on a map, and each array holds info for: name, another array of points this point can connect to, latitude, longitude, a short form of the name, a boolean, and another boolean. The array has been written by another developer on my team, and he has written it as such:

        names[0]=new Array;
        names[0][0]="Campus Ice Centre";
        names[0][1]= new Array(0,1,2);
        names[0][2]=43.95081811364498;
        names[0][3]=-78.89848709106445;
        names[0][4]="CIC";
        names[0][5]=false;
        names[0][6]=false;
        names[1]=new Array;
        names[1][0]="Shagwell's";
        names[1][1]= new Array(0,1);
        names[1][2]=43.95090307839151;
        names[1][3]=-78.89815986156464;
        names[1][4]="shg";
        names[1][5]=false;
        names[1][6]=false;

    Where I would probably have personally written it like this:

        var names = [];
        names[0] = new Array("Campus Ice Centre", new Array(0,1,2), 43.95081811364498, -78.89848709106445, "CIC", false, false);
        names[1] = new Array("Shagwell's", new Array(0,1), 43.95090307839151, -78.89815986156464, "shg", false, false);

    They both work perfectly fine, of course, but what I'm wondering is: 1) does one take longer than the other to actually process? 2) am I incorrect in assuming there is a benefit to the compactness of my version of the same thing? I'm just a little worried about his 3000 lines of code versus my 3-400 to get the same result. Thanks in advance for any guidance.

    Read the article

  • Delphi Unicode String Type Stored Directly at its Address (or "Unicode ShortString")

    - by Andreas Rejbrand
    I want a string type that is Unicode and that stores the string directly at the address of the variable, as is the case with the (Ansi-only) ShortString type. I mean, if I declare a variable S: ShortString and let S := 'My String', then at @S I will find the length of the string (as one byte, so the string cannot contain more than 255 characters) followed by the ANSI-encoded string itself. What I would like is a Unicode variant of this. That is, I want a string type such that, at @S, I will find an unsigned 32-bit integer (or a single byte would actually be enough) containing the length of the string in bytes (or in characters, which is half the number of bytes), followed by the Unicode representation of the string. I have tried WideString, UnicodeString, and RawByteString, but they all appear to store only an address at @S, with the actual string somewhere else (I guess this has to do with reference counting and such). Update: The most important reason for this is probably that it would be very problematic if sizeof(string) were variable. I suspect that there is no built-in type to use, and that I have to come up with my own way of storing text the way I want (which actually is fun). Am I right? Update: I will, among other things, need to use these strings in packed records. I also need to manually read/write these strings to files/the heap. I could live with fixed-size strings, such as <= 128 characters, and I could redesign the problem so it will work with null-terminated strings. But PChar will not work, for sizeof(PChar) = 1 - it's merely an address. The approach I eventually settled for was to use a static array of bytes. I will post my implementation as a solution later today.

    Read the article

  • Adding a minimum display time for Silverlight splash screen.

    - by David
    When hosting a Silverlight application on a webpage, it is possible to use the splashscreensource parameter to specify a simple Silverlight 1.0 (XAML + JavaScript) control to be displayed while the real xap file is downloaded, and which can receive notification of the download's progress through onSourceDownloadProgressChanged. If the xap file is in cache, the splash screen is not shown (and if the download only takes 1 second, the splash screen will only be shown for 1 second). I know this is not best practice in general, but I am looking for a way to specify a minimum display time for the splash screen: even if the xap is cached or the download is fast, the splash screen would remain up for at least, let's say, 5 seconds (for example, to show a required legal disclaimer, corporate identity mark or other bug). I do want to do it in the splash screen exclusively (rather than in the main xap), as I want it to be clean and uninterrupted (for example, a sound bug) and shown to the user as soon as they open the page, rather than after the download (which could take anywhere from 1 to 20+ seconds). I'd prefer not to accomplish this with preloading - replacing the splash screen with a full Silverlight xap application (with its own loading screen), which then programmatically loads and displays the full xap after a minimum wait time.

    Read the article

  • Django - how to write user and profile handling in the best way?

    - by SpankMe
    Hey, I am writing a simple site that requires users and profiles to be handled. The first thought is to use Django's built-in user handling, but the user model is too narrow and does not contain the fields that I need. The documentation mentions user profiles, but the user profiles section has been removed from the djangobook chapter covering Django 1.0 (ideally, the solution should work with Django 1.2), and the Internet is full of different solutions, which does not make the choice any easier (like user model inheritance, user profiles and Django signals, and so on). I would like to know how to write this in a good, modern, fast and secure way. Should I try to extend Django's built-in user model, or should I create my own user model wide enough to keep all the information I need? Below you may find some specifications and expectations for the working solution:

    - users should be able to register and authenticate
    - every user should have a profile (or a model with all required fields)
    - users don't need the Django built-in admin panel, but they need to edit their profiles/models via a simple web form

    Please let me know how you solve these issues in your applications, and what the best current way to handle users with Django is. Any links to articles/blogs or code examples are highly appreciated!

    Read the article

  • database schema eligible for delta synchronization

    - by WilliamLou
    It's a question for discussion only. Right now, I need to re-design a MySQL database table. Basically, this table contains all the contract records I synchronized from another database. The contract records can be modified or deleted, and users can add new contract records via a GUI interface. At this stage, the table structure is exactly the same as the contract info (columns: serial number, expiry date, etc.). In that case, I can only synchronize the whole table (delete all old records, replace them with new ones). If I want to delta-synchronize the table (only synchronize modified, new, and deleted records), how should I change the database schema? Here is the method I came up with, but I need your suggestions because I think it's a common scenario in database applications.

    1) Introduce a sequence number concept/column: for each sequence, mark the newly added records, modified records, and deleted records with this sequence number. By recording the last synchronized sequence number, only pass those records with a higher sequence number.
    2) Because deleted contracts can be added back, and the original table has primary key constraints, should I create another table for those deleted records? Or add a flag column to indicate whether a contract has been deleted?

    I hope I explained my question clearly. Anyway, if you know any articles or have your own suggestions about this, please let me know. Thanks!
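
    A small sketch of how the consumer side of option 1) might look (my illustration; the table and column names and the single-character operation flag are assumptions rather than part of the question):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class DeltaSync {
            // Pulls contract changes recorded after the last synchronized sequence number
            // and returns the newest sequence number seen, to be stored for the next run.
            public static long pullChanges(Connection con, long lastSyncedSeq) throws SQLException {
                String sql = "SELECT seq_no, op_type, serial_number, expiry_date "
                           + "FROM contract_changes WHERE seq_no > ? ORDER BY seq_no";
                long newest = lastSyncedSeq;
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setLong(1, lastSyncedSeq);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            newest = rs.getLong("seq_no");
                            String op = rs.getString("op_type"); // e.g. 'I', 'U' or 'D'
                            // apply the insert/update/delete locally according to op ...
                        }
                    }
                }
                return newest;
            }
        }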

    Read the article

  • How do I write a J2EE/EJB Singleton?

    - by Bears will eat you
    A day ago my application was one EAR, containing one WAR, one EJB JAR, and a couple of utility JAR files. I had a POJO singleton class in one of those utility files, it worked, and all was well with the world: EAR |--- WAR |--- EJB JAR |--- Util 1 JAR |--- Util 2 JAR |--- etc. Then I created a second WAR and found out (the hard way) that each WAR has its own ClassLoader, so each WAR sees a different singleton, and things break down from there. This is not so good. EAR |--- WAR 1 |--- WAR 2 |--- EJB JAR |--- Util 1 JAR |--- Util 2 JAR |--- etc. So, I'm looking for a way to create a Java singleton object that will work across WARs (across ClassLoaders?). The @Singleton EJB annotation seemed pretty promising until I found that JBoss 5.1 doesn't seem to support that annotation (which was added as part of EJB 3.1). Did I miss something - can I use @Singleton with JBoss 5.1? Upgrading to JBoss AS 6 is not an option right now. Alternately, I'd be just as happy to not have to use EJB to implement my singleton. What else can I do to solve this problem? Basically, I need a semi-application-wide* hook into a whole bunch of other objects, like various cached data, and app config info. As a last resort, I've already considered merging my two WARs into one, but that would be pretty hellish. *Meaning: available basically anywhere above a certain layer; for now, mostly in my WARs - the View and Controller (in a loose sense).

    Read the article

  • JqGrid updating grid on add

    - by CSharpAtl
    Scenario: I have three columns in my grid, but only one is editable; the other two are filled in on the server side. I am using the built-in add functionality of jqGrid and NOT refreshing the grid on a successful add. I would like to have the row added to the grid, as it is automatically, but I want to add it myself, because jqGrid only adds the one column that is marked 'editable'. I cannot seem to find a way to block the row from being automatically added to the grid, or a way to override the built-in add functionality of the grid. My idea was to add the row myself, because I will have received back the full row of data on my submit. Questions:

    1. Is there a way to stop the grid from automatically adding the row when I do not want a grid refresh, so that I can manually add all the data for the row?
    2. Is it possible to use the built-in add button and override its onClick, without digging in and directly figuring out what jqGrid calls the button?
    3. Any better ideas on how to get the row added to the grid from the server side without doing it all manually, i.e. creating my own add button, popping up a dialog and handling all the submit functionality myself?

    EDIT: What would help is if I could stop the grid from auto-adding the row... I can deal with doing it myself.

    Read the article
