Search Results

Search found 3165 results on 127 pages for 'triangle generation'.

  • Understanding CSRF - Simple Question

    - by byronh
    I know this might make me seem like an idiot, but I've read everything there is to read about CSRF and I still don't understand how using a 'challenge token' adds any sort of prevention. Please help me clarify the basic concept; none of the articles and posts I've read here on SO seem to state explicitly what value you're comparing with what. From OWASP:

    In general, developers need only generate this token once for the current session. After initial generation of this token, the value is stored in the session and is utilized for each subsequent request until the session expires.

    If I understand the process correctly, this is what happens: I log in at http://example.com and a session/cookie is created containing this random token. Then, every form includes a hidden input also containing this random value from the session, which is compared with the session/cookie upon form submission.

    But what does that accomplish? Aren't you just taking session data, putting it in the page, and then comparing it with the exact same session data? Seems like circular reasoning. These articles keep talking about following the "same-origin policy", but that makes no sense, because all CSRF attacks ARE of the same origin as the user; they just trick the user into doing actions he/she didn't intend.

    Is there any alternative other than appending the token to every single URL as a query string? That seems very ugly and impractical, and makes bookmarking harder for the user.
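
    A rough sketch of the mechanics in Python (framework-agnostic; the session dict and function names here are invented for illustration, not taken from the post): the point is not the comparison itself, but that a cross-site attacker can make the victim's browser send the cookie automatically, yet cannot read the hidden-field token out of your page to include it in the forged request.

        import hmac
        import secrets

        def csrf_token(session):
            # Generate the per-session token once and keep it server-side.
            if "csrf_token" not in session:
                session["csrf_token"] = secrets.token_hex(32)
            return session["csrf_token"]

        def render_form(session):
            # Embed the token as a hidden field in every form we render.
            return '<input type="hidden" name="csrf_token" value="%s">' % csrf_token(session)

        def is_valid_submission(session, submitted_token):
            # A forged cross-site POST carries the session cookie automatically,
            # but has no way to know this value, so the check fails.
            expected = session.get("csrf_token", "")
            return bool(expected) and hmac.compare_digest(expected, submitted_token or "")

    Note that the token only needs to accompany state-changing requests (normally POSTs), so bookmarkable GET URLs do not need it appended.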

    Read the article

  • Outputting value of TrueClass / FalseClass to integer or string

    - by Nick Gorbikoff
    Hello. I'm trying to figure out if there is an easy way to do the following, short of adding a to_i method to TrueClass/FalseClass. Here is the dilemma: I have a boolean field in my Rails app, which is naturally stored as a tinyint in MySQL. However, I need to generate XML based on the data in MySQL and send it to a customer; their SOAP service requires the field in question to have 0 or 1 as its value. So at the time of the XML generation I need to convert my false to 0 and my true to 1 (which is how they are stored in the DB). Since true and false lack a to_i method, I could write an if statement that generates either 1 or 0 depending on the true/false state. However, I have about 10 of these indicators and creating an if/else for each is not very DRY. So what do you recommend I do? I could add a to_i method to the TrueClass/FalseClass classes, but I'm not sure where I should scope it in my Rails app: just inside this particular model, or somewhere else?

    Read the article

  • What data structure to use / data persistence

    - by Dave
    I have an app where I need one table of information with the following fields:

    - field 1 - int or char
    - field 2 - string (max 10 char)
    - field 3 - string (max 20 char)
    - field 4 - float

    I need the program to filter on field 1 based upon a segmented control, and select a field 2 value from a picker. From this data I need to look up field 4 to use in a calculation. Total records will be about 200; I never see it going above 400-500. I am going to use a singleton, which I am able to do; I just need help with the structure for this and with data persistence. What type of data structure should I use, and should I use NSNumber, NSString, etc., or plain old data types like float, char, etc.? I thought about a struct put into an array, but there is probably a better way. This is new to me, so any help or reference to examples would be great. I also thought about a plist or dictionary, but it looks like that is just a lookup key and a single field, which obviously won't work. Core Data looked like overkill to me. Also, with any recommendation, how should I get the initial data into it? I want the user to be able to edit and add to the database. Sorry for the old terms, you can see what generation I am from... Thanks in advance!!!!
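
    The question is iOS-specific, but the shape of the lookup itself is simple; here is a language-agnostic sketch in Python (the field names are invented purely for illustration) of holding a few hundred records in memory and resolving field 4 from fields 1 and 2:

        from dataclasses import dataclass

        @dataclass
        class Record:               # illustrative names for the four fields
            category: int           # "field 1", selected via the segmented control
            code: str               # "field 2", chosen from the picker
            label: str              # "field 3"
            factor: float           # "field 4", used in the calculation

        RECORDS = [
            Record(1, "A10", "Example A", 1.25),
            Record(1, "B20", "Example B", 2.50),
            Record(2, "A10", "Example C", 0.75),
        ]

        def codes_for(category):
            # Values to populate the picker for the selected segment.
            return [r.code for r in RECORDS if r.category == category]

        def factor_for(category, code):
            # Resolve "field 4" for the calculation; None if there is no match.
            return next((r.factor for r in RECORDS
                         if r.category == category and r.code == code), None)

    On iOS the same idea maps onto an array of small model objects (or dictionaries) loaded from a plist or similar store, with the filter and lookup done in memory, since the data set is tiny.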

    Read the article

  • Compiling and linking libcurl to create a stand alone dll

    - by Haraldo
    Hi, I've managed to compile a DLL with the necessary linked libraries (*.lib) and with CURL_STATICLIB set in the preprocessor section, among other settings. I'm using the "libcurl-7.19.3-win32-ssl-msvc.zip" package and compiling with VS 2008 Express. This has been the first version I've managed to get compiled properly with no link issues etc. The problem I have now is that my DLL needs libcurl.dll to function, and this is not OK; my DLL needs to be independent. I have no idea how to implement this. I've taken all day just to get what I've got compiled. My current settings:

    - Runtime Library is set to Multi-threaded DLL (debug/release respectively) under C/C++, Code Generation
    - A number of preprocessor definitions are set, CURL_STATICLIB being one of them
    - Configuration Type is set to Dynamic Library
    - Use of MFC is set to "Use MFC in a static library"
    - Additional Library Directories is set to the lib folders (debug/release respectively)

    I've noticed there is a curllib_static.lib file which I've tried instead of curllib.lib as an additional dependency, but it only compiles with the latter. This is driving me nuts! So I guess I need some guidance as to how to make my DLL completely static so it doesn't have any dependencies. I notice my DLL is currently dependent on:

    - CURLLIB.DLL
    - MSVCR90D.DLL

    As I'm pretty new to C++ it could be a setting I'm missing in VS 2008, but I'm not sure. One person said I should be using a static library with *.a files (libcurl.a) etc., but when I do this I get link errors which I haven't been able to resolve. Any guidance here would be much appreciated.

    Read the article

  • Eclipselink update existing tables

    - by javydreamercsw
    Maybe I got it wrong, but I thought that JPA was able to update an existing table (the model changed by adding a column), but it is not working in my case. I can see in the logs EclipseLink attempting to create the table but failing because it already exists; instead of trying an update to add the column, it keeps going. My persistence.xml properties:

        <property name="javax.persistence.jdbc.url" value="jdbc:mysql://localhost:3306/jwrestling"/>
        <property name="javax.persistence.jdbc.password" value="password"/>
        <property name="javax.persistence.jdbc.driver" value="com.mysql.jdbc.Driver"/>
        <property name="javax.persistence.jdbc.user" value="user"/>
        <property name="eclipselink.ddl-generation" value="create-tables"/>
        <property name="eclipselink.logging.logger" value="org.eclipse.persistence.logging.DefaultSessionLog"/>
        <property name="eclipselink.logging.level" value="INFO"/>

    And here's the table with the change (online column added):

        [EL Warning]: 2010-05-31 14:39:06.044--ServerSession(16053322)--Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.1.0.v20100517-r7246): org.eclipse.persistence.exceptions.DatabaseException
        Internal Exception: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'account' already exists
        Error Code: 1050
        Call: CREATE TABLE account (ID INTEGER NOT NULL, USERNAME VARCHAR(32) NOT NULL, SECURITY_KEY VARCHAR(255) NOT NULL, EMAIL VARCHAR(64) NOT NULL, STATUS VARCHAR(8) NOT NULL, TIMEDATE DATETIME NOT NULL, PASSWORD VARCHAR(255) NOT NULL, ONLINE TINYINT(1) default 0 NOT NULL, PRIMARY KEY (ID))
        Query: DataModifyQuery(sql="CREATE TABLE account (ID INTEGER NOT NULL, USERNAME VARCHAR(32) NOT NULL, SECURITY_KEY VARCHAR(255) NOT NULL, EMAIL VARCHAR(64) NOT NULL, STATUS VARCHAR(8) NOT NULL, TIMEDATE DATETIME NOT NULL, PASSWORD VARCHAR(255) NOT NULL, ONLINE TINYINT(1) default 0 NOT NULL, PRIMARY KEY (ID))")
        [EL Warning]: 2010-05-31 14:39:06.074--ServerSession(16053322)--Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.1.0.v20100517-r7246): org.eclipse.persistence.exceptions.DatabaseException

    After this it continues with the following. Am I doing something wrong, or is it a bug?

    Read the article

  • Syncing two separate structures to the same master data

    - by Mike Burton
    I've got multiple structures to maintain in my application. All link to the same records, and one of them could be considered the "master" in that it reflects actual relationships held in files on disk. The other structures are used to "call out" elements of the main design for purchase and work orders. I'm struggling to come up with a pattern that deals appropriately with changes to the master data. As an example, the following trees might refer to the same data:

        A
        |_ B
           |_ C
           |_ D
        |_ E
           |_ B
              |_ C
              |_ D

        A
        |_ B
        E
        C
        |_ D

        A
        |_ B
        C
        D
        E

    These secondary structures follow internal rules, but their overall structure is usually user-determined. In all cases (including the master), any element can be used in multiple locations and in multiple trees. When I add a child to any element in the tree, I want to either automatically build the secondary structure for each instance of the "master" element or at least advertise the situation to the user and allow them to manually generate the data required for the secondary trees. Is there any pattern which might apply to this situation? I've been treating it as a view problem, but it turns out to be more complicated than that when you look at the initial generation of the data.
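
    One commonly suggested shape for this is an Observer (publish/subscribe) arrangement hanging off the master tree; a rough Python sketch under the assumption that each secondary occurrence registers against the master node it mirrors (all class and method names here are invented for illustration):

        class MasterNode:
            def __init__(self, name):
                self.name = name
                self.children = []
                self._observers = []        # secondary-tree nodes mirroring this node

            def attach(self, observer):
                self._observers.append(observer)

            def add_child(self, child):
                self.children.append(child)
                # Every secondary occurrence decides how (or whether) to mirror the change.
                for obs in list(self._observers):
                    obs.master_child_added(self, child)

        class SecondaryNode:
            def __init__(self, master, auto_mirror=True):
                self.master = master
                self.children = []
                self.auto_mirror = auto_mirror
                master.attach(self)

            def master_child_added(self, master, child):
                if self.auto_mirror:
                    self.children.append(SecondaryNode(child, self.auto_mirror))
                else:
                    # "Advertise the situation": surface it to the user instead of acting.
                    print("%r gained child %r; update this call-out manually" % (master.name, child.name))

    Whether each secondary tree auto-builds or merely flags the change then becomes a per-tree (or per-node) policy rather than something baked into the master structure.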

    Read the article

  • Using Zend Framework and Doctrine with independent modular structures

    - by stefax
    I've seen a lot of articles about integrating ZF and Doctrine. There is also a proposal for ZF here, but they always show one of two possible structures: either they put all models into one top-level model directory, or they put them into a module-related model directory.

        application
        |-- Bootstrap.php
        |-- configs
        |-- controllers
        |-- models            - EITHER HERE
        |-- modules
        |   -- examplemodule
        |      |-- controllers
        |      |-- models     - OR HERE
        |      |-- views
        |-- views

    For our projects I see problems with either of the two options:

    1. One directory (application/models): in a complex system, after a short time there will be hundreds of files, above all when you have the table classes too (e.g. User.php and UserTable.php).
    2. Module-based model directories (application/modules/examplemodule/models): in many cases we use models in multiple modules at the same time. So the "User" is required e.g. in the modules "game", "administration", ...

    Is there a way to use some kind of sub directories under the top-level directory "models" to get some grouping? It should be completely independent of the module structure.

        application
        |-- Bootstrap.php
        ...
        |-- models
        |   -- user
        |   |  |-- User.php
        |   |  |-- Friend.php
        |   |  |-- other user related models
        |   -- game
        |   |  |-- Game.php
        |   |  |-- Score.php
        |   |  |-- ...
        ...

    Any solution should support autoloading and class generation from YAML files. Any ideas, links or solutions? Thanks!

    Read the article

  • Going from small to medium-sized websites.

    - by Landitus
    I've been coding websites for a couple of years now, mostly in PHP and XHTML. I come from the design world, but I'm proud of doing standards-compliant websites and great interfaces. I've also used Wordpress and loved it. Most of the time these were really simple commercial websites, with no database involved, where everything is done from scratch and every page is parsed through an index?page=xxx. But I have a few prospects that are larger websites (let's call them 'medium-sized websites') where I feel I'm lacking the following:

    - How to dispatch or render the pages (an MVC controller instead of index?page=???)
    - Proper page hierarchy and easy breadcrumbs implementation
    - Auto-generation of navigation menus, or an easy way to maintain them
    - Clean URLs
    - Form validation
    - Easy database support

    I really don't know if I should be looking into PHP scripts and refining my skills, or get into a CMS (like Drupal) or a PHP framework. I found Wordpress very reassuring and didn't feel trapped into crazy conventions, but I feel it is not the right tool for this. I hate the CMS page with the big textbox, as I am used to coding every page by hand; my pages are not a title and a textbox. Got the feeling? My PHP skills are still sort of medium/low, but I would like to hear some thoughts on what I should learn to take the next step!

    Read the article

  • DAL Layer : EF 4.0 or Normal Data access layer with Stored Procedure

    - by Harryboy
    Hello experts. I am working on a mid-to-large size application which will be used as a product, and we need to decide on our DAL layer. The application UI is in Silverlight and the DAL layer is going to sit behind a service layer. We are also moving ahead with a domain model, so our DB tables and domain classes do not have the same structure, which means patterns like Data Mapper and Repository will definitely come into the picture. I need to design the DAL layer considering the factors below, in priority order:

    - Speed of development, with above-average performance
    - Maintenance
    - Future support and stability of the technology
    - Performance

    Limitations:

    1) As we need to strictly stay with Microsoft, we cannot use NHibernate or any other ORM except EF 4.0.
    2) We can use any code generation tool (it should be open source or very cheap), but it should only generate code in .NET, so there would not be any licensing issue on a per-copy basis.

    Questions: I have read so many articles about EF 4.0; at the outset it looks like it is still lacking features compared to NHibernate, but it is considerably better than EF 1.0. So, do you feel that we should go ahead with EF 4.0, or should we stick to ADO.NET and use whichever code generation tool (CodeSmith or another) you feel is best? I also need to answer questions like how long it would take to port the application from EF 4.0 to ADO.NET if in the future we get stuck with EF 4.0 over some feature or run into a serious performance issue; and in the reverse case, if we choose ADO.NET, how long it would take to switch to EF 4.0. Lastly, as I was going through the articles I found that the code-only approach (with POCO classes) seems best suited to our requirements, as switching from one technology to the other is really easy. Please share your thoughts and please give guidance on the above questions.

    Read the article

  • Linking error while using Qt static built libraries

    - by Kamran Amini
    I hope this is not a duplicate. Recently I'm developing a native C++ application using Qt 4.8.3 and VS2008. Since clients run the application on their naked machines, they would need to install the VC++ 2008 redistributable package, so I decided to make it statically linked. I changed my project settings (C/C++, Code Generation, Runtime Library) to /MTd. I also compiled Qt again, this time as a static build, using the following steps, originally found on the blog post "Static Qt with static CRT (VS 2008)":

    1. Replaced -MD with -MT in the QMAKE_CFLAGS_RELEASE and QMAKE_CFLAGS_DEBUG lines in %QDIR%\mkspecs\win32-msvc2008\qmake.conf
    2. nmake confclean
    3. configure -static -platform win32-msvc2008 -no-webkit
    4. nmake sub-src

    I compiled Qt successfully, but when I tried to compile my application again, it gave me some strange errors:

        1>Linking...
        1>qtmaind.lib(qtmain_win.obj) : error LNK2005: "public: bool __thiscall QBasicAtomicInt::deref(void)" (?deref@QBasicAtomicInt@@QAE_NXZ) already defined in QtCored4.lib(QtCored4.dll)
        1>qtmaind.lib(qtmain_win.obj) : error LNK2005: "public: bool __thiscall QBasicAtomicInt::operator!=(int)const " (??9QBasicAtomicInt@@QBE_NH@Z) already defined in QtCored4.lib(QtCored4.dll)
        1>qtmaind.lib(qtmain_win.obj) : error LNK2005: "public: __thiscall QString::~QString(void)" (??1QString@@QAE@XZ) already defined in QtCored4.lib(QtCored4.dll)

    I changed some lib files, but with each change the situation got worse; for example, I tried to use QtCored.lib instead of QtCored4.lib because it is newly created after the static compilation. I think I've missed something in building the static Qt libs. Thanks.

    Read the article

  • Entity Framework 4 / POCO - Where to start?

    - by Basiclife
    Hi, I've been programming for a while and have used LINQ-to-SQL and LINQ-to-Entities before (although when using Entities it has been with an Entity/Table 1-1 relationship, i.e. not much different from L2SQL). I've been doing a lot of reading about Inversion of Control, Unit of Work, POCO and repository patterns, and would like to use this methodology in my new applications. Where I'm struggling is finding a clear, concise beginner's guide for EF4 which doesn't assume knowledge of EF1. The specific questions I need answered are:

    - Code first or model first? Pros/cons with regard to EF4 (i.e. what happens if I go code first, change the code at a later date and need to regenerate my DB model: does the data get preserved and transformed, or dropped?)
    - Assuming I'm going code-first (I'd like to see how EF4 converts that to a DB schema), how do I actually get started? Quite often I've seen articles with entity diagrams stating "So this is my entity model, now I'm going to ...". Unfortunately, I'm unclear whether they've created the model in the designer, saved it to generate code and then stopped any further auto-code generation, or whether they've coded POCO classes and somehow imported them into the designer view.

    I suppose what I really need is an understanding of where the "magic" comes from, and how to add it myself if I'm not just generating an EF model directly from a DB. I'm aware the question is a little vague, but I don't know what I don't know, so any input / correction / clarification is appreciated. Needless to say, I don't expect anyone to sit here and teach me EF; I'd just like some good tutorials/forums/blogs/etc. for complete Entity newbies. Many thanks in advance.

    Read the article

  • VS2010: "Select Resource" dialog & resx location

    - by Dav
    Got two issues with the VS2010 / VS2008 Select Resource dialog, the one that appears when you want to add an image to a button in a WinForms app, for example.

    Give me my files back! It only seems to see the default project resources file (Properties\Resources.resx) and resx files in the project root (say MyProject\famfamfam.resx). We have quite a few icons all over the app, and because some of them come from different icon sets (like famfamfam) and some are related to this project only, we'd like to keep them separate. For that same reason (keeping the solution neat and tidy) we want to store these extra resource files in the Resources folder (e.g. Resources\famfamfam.resx). However, we'd also like to keep using the Select Resource dialog :-) Because it does not see the 'extra' resource files, we're having to select a 'fake' icon now (from the global Resources.resx file) and then manually change it to reference the right icon in .Designer.cs. As you can imagine, this is a pain.

    Stop modifying my files! The second issue is a bit more annoying. We use the excellent MultiLang add-in for Visual Studio to globalize our app. It stores its translations in MultiLang.resx and MultiLang.XY.resx files in the project root, where XY is a language code, e.g. .cs.resx for Czech. These have to be set to the "No code generation" access modifier. What Select Resource seems to be doing is setting all .resx files it can find to Internal.

    Exec summary: Is there a way to convince the Select Resource dialog to look for extra .resx files anywhere besides the project root? Is there any way to stop it from modifying the access modifier of the resources it does see (other than filing a bug with MS)? Thanks in advance for any suggestions!

    Read the article

  • Hybrid EAV/CR model via WCF (and statically-typed language)?

    - by Pat
    Background: I'm working on the architecture for a cloud-based LOB application, using Silverlight for the client, WCF and ASP.NET/C# for the server, and SQL Server for storage. The data model requires some flexibility per user (the ability to add custom properties and define validation rules for them, for example), and a hybrid EAV/CR persistence model on the server side will suit nicely.

    Problem: I need an efficient and maintainable technology and approach to handle the transformation from the persisted EAV model to/from WCF (and similarly allow the client to bind to the resulting data; DataGrid is a key UI element). Admission: I don't yet know enough about WCF to understand if it supports ExpandoObject directly, but I suspect it will.

    Options: I started off looking at WCF RIA Services, but quickly discovered they're heavily dependent upon both static type data and compile-time code generation. Neither of these appeal. The options I'm considering include:

    1. Using WCF RIA Services and passing the data over the network directly in EAV form (i.e. a Dictionary), handling the binding issue purely on the client side (like this).
    2. Using a dynamic language (probably IronPython) to handle both ends of the communication, with plumbing to generate the necessary CLR type data on the client to allow binding, and transform to/from EAV form on the server (a spam preventer stopped me from posting a URL here, I'll try it in a comment).
    3. Dynamic LINQ (CreateClass() and friends), although I'm way out of my depth there and don't know what the limitations of that approach might be yet.

    I'm interested in comments on these approaches as well as alternative approaches that might solve the problem.

    Other notes: The Silverlight client will not be the only consumer of the service, making me slightly uncomfortable with option #1 above. While the data model is flexible, it's not expected to be modified heavily. For argument's sake, we could assume that we might have 25 distinct data models active at a given time, with something like 10-20 unique data fields/rules each. Modifications to the data model will happen infrequently (typically when a new user is initially configured).
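
    As a concrete picture of what "transforming out of EAV form" means on the server side, here is a toy sketch in plain Python (not WCF; the row layout and function names are invented for illustration) that pivots attribute rows into per-entity property bags and back:

        # Each persisted EAV row: (entity_id, attribute_name, value)
        rows = [
            (1, "Name", "Alice"), (1, "CreditLimit", "500"),
            (2, "Name", "Bob"),   (2, "Region", "EMEA"),
        ]

        def to_entities(rows):
            # EAV rows -> {entity_id: {attribute: value}}, one bag per entity,
            # which is the shape a client grid wants to bind to.
            entities = {}
            for entity_id, attribute, value in rows:
                entities.setdefault(entity_id, {})[attribute] = value
            return entities

        def to_rows(entities):
            # The reverse transformation when a client sends edited bags back.
            return [(entity_id, attribute, value)
                    for entity_id, bag in entities.items()
                    for attribute, value in bag.items()]

        print(to_entities(rows))

    The hard part in the question is not this pivot but doing the equivalent inside a statically-typed pipeline (WCF contracts plus Silverlight binding), which is exactly what the three options above attack in different ways.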

    Read the article

  • Converting TrueClass / FalseClass to integer.

    - by Nick Gorbikoff
    Hello. I'm trying to figure out if there is an easy way to do the following, short of adding a to_i method to TrueClass/FalseClass. Here is the dilemma: I have a boolean field in my Rails app, which is naturally stored as a tinyint in MySQL. However, I need to generate XML based on the data in MySQL and send it to a customer; their SOAP service requires the field in question to have 0 or 1 as its value. So at the time of the XML generation I need to convert my false to 0 and my true to 1 (which is how they are stored in the DB). Since true and false lack a to_i method, I could write an if statement that generates either 1 or 0 depending on the true/false state. However, I have about 10 of these indicators and creating an if/else for each is not very DRY. So what do you recommend I do? I could add a to_i method to the TrueClass/FalseClass classes, but I'm not sure where I should scope it in my Rails app: just inside this particular model, or somewhere else?

    Read the article

  • Thread Local Memory, Using std::string's internal buffer for c-style Scratch Memory.

    - by Hassan Syed
    I am using Protocol Buffers and OpenSSL to generate HMACs and then CBC-encrypt the two fields to obfuscate the session cookies, similar to Kerberos tokens. Protocol Buffers' API communicates with std::strings and has a buffer caching mechanism; I exploit the caching mechanism, for successive calls in the same thread, by placing it in thread-local memory; additionally, the OpenSSL HMAC and EVP CTXs are also placed in the same thread-local memory structure (see this question for some detail on why I use thread-local memory and the massive amount of speedup it enables even with a single thread). The generation and deserialization ("my algorithms") of these cookie strings uses intermediary void*s and std::strings, and since Protocol Buffers has an internal memory retention mechanism I want the same characteristics for "my algorithms". So how do I implement a common scratch memory? I don't know much about the rdbuf (streambuf / stringbuf??) of the std::string object. I would presumably need to grow it to the lowest common size ever encountered during the execution of "my algorithms". Thoughts? My question, I guess, would be: "is the internal buffer of a string re-usable, and if so, how?" Edit: See the comments on Vlad's answer, please.

    Read the article

  • jmap -histo is missing a lot of memory

    - by ripper234
    I have a JVM with 12 gigs of total RAM, out of which 7 GB is allocated to the old generation. There seems to be some memory leak, because almost the entire old gen is full, and will not release when I schedule a GC (the process is not doing anything else at that time). A jmap -histo dump only reveals less than 1 gigabyte worth of objects. Where are the missing 6 gigs? What better tool do you propose for diagnosing this? Here is the top of the jmap output:

         num     #instances         #bytes  class name
        ----------------------------------------------
           1:        429853       68725736  <constMethodKlass>
           2:        429853       51594040  <methodKlass>
           3:         37503       49611368  <constantPoolKlass>
           4:         37503       31109576  <instanceKlassKlass>
           5:        191716       28019968  [C
           6:         32573       26933152  <constantPoolCacheKlass>
           7:         86158       13789560  [I
           8:         53532       11244232  [B
           9:           284       10507216  [J
          10:        137608        7210664  <symbolKlass>
          11:        203072        6498304  java.lang.String
          12:         10132        5219512  <methodDataKlass>
          13:         39694        4128176  java.lang.Class
          14:         55713        3792816  [S
          15:         61816        3141936  [[I
          16:         90109        2883488  java.util.HashMap$Entry

    Read the article

  • Is there a good way of automatically generating javascript client code from server side python

    - by tat.wright
    I basically want to be able to:

    - Write a few functions in Python (with the minimum amount of extra metadata)
    - Turn these functions into a web service (with the minimum of effort / boilerplate)
    - Automatically generate some JavaScript functions / objects for RPC (this should prevent me from doing as many stupid things as possible, like mistyping method names, forgetting the names of methods, or passing the wrong number of arguments)

    Example Python:

        def hello_world():
            return "Hello world"

    JavaScript:

        ...
        <!-- This file is automatically generated (either dynamically or statically) -->
        <script src="http://myurl.com/webservice/client_side_javascript"> </script>
        ...
        <script>
        $('#button').click(function () {
            hello_world(function (data) {
                $('#label').text(data);
            });
        });
        </script>

    A bit of research has shown me some approaches that come close to this:

    - Automatic generation of JSON-RPC services from functions, with a little boilerplate code in Python, and then using jQuery and JSON to do the calls (still easy to make mistakes with method names, still need to be aware of URLs when calling, very irritating to write these calls yourself in the Firebug shell)
    - Using a library like soaplib to generate WSDL from Python (by adding copious type information), and then somehow converting this into JavaScript (not sure if there is even a library to do this)

    But are there any approaches closer to what I want?
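
    One minimal direction (a hedged sketch, not a recommendation of any particular library; the registry, dispatcher and stub generator below are invented for illustration) is to keep the exposed functions in a single registry, serve one JSON endpoint from it, and generate the matching JavaScript wrappers from the same registry so method names and arities can never drift apart:

        import inspect
        import json

        def hello_world():
            return "Hello world"

        def add(a, b):
            return a + b

        EXPOSED = {f.__name__: f for f in (hello_world, add)}

        def dispatch(payload):
            # Server side: one endpoint receives {"method": ..., "params": [...]}.
            request = json.loads(payload)
            func = EXPOSED[request["method"]]
            return json.dumps({"result": func(*request.get("params", []))})

        def javascript_stubs(url="/rpc"):
            # Emit one JS wrapper per exposed function, derived from the Python
            # signatures, so client code cannot mistype names or arities.
            stubs = []
            for name, func in EXPOSED.items():
                params = list(inspect.signature(func).parameters)
                args = ", ".join(params + ["callback"])
                stubs.append((
                    "function {name}({args}) {{\n"
                    "  $.post('{url}', JSON.stringify({{method: '{name}', params: [{params}]}}),\n"
                    "         function (r) {{ callback(r.result); }}, 'json');\n"
                    "}}"
                ).format(name=name, args=args, params=", ".join(params), url=url))
            return "\n".join(stubs)

        print(javascript_stubs())    # serve this as client_side_javascript

    The generated file is what the <script src="..."> line above would pull in; hooking dispatch() up to an actual URL is left to whatever web framework is in use.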

    Read the article

  • I like the way they Design/Architecture it but how do I implement this

    - by Rachel
    Summary: I have different components on the homepage, and each component shows some promotion to the user. I have the Cart as one component, and depending upon the content of the cart, promotions are shown. I have to track the user's online activities and send that information to Omniture for report generation. Now, my components are loaded asynchronously (they are loaded when an Ajax request fires), so there is no fixed pattern, or rather no information, on when components will appear on the page. In order to pass information to Omniture I need to call the track function on $(document).ready() and append information for each component (7 parameters are required by Omniture for each component). So in the init/config function of each component I am calling Omniture and passing parameters, but now the number of Omniture calls is directly proportional to the number of components on the page, and this is not acceptable, as each call to Omniture is very expensive. I am looking for a way to club together the 7 parameters for every component and then make just one call to Omniture in which I pass that information. Note that I do not know when the components are loaded, so there is no pre-defined time or number of components that will be loaded. The thing is, I am calling the track function when the document is ready, but components are loaded after the call to Omniture has been made. So my question is:

    Q: How can I collect the information for all the components and then make just one call to Omniture to send that information, given that I do not know when the components are loaded (they load on Ajax requests)?

    Hope I am able to explain my challenge; I would appreciate it if someone could provide design/architecture solutions for it.

    Read the article

  • Generate a set of strings with maximum edit distance

    - by Kevin Jacobs
    Problem 1: I'd like to generate a set of n strings of fixed length m from alphabet s such that the minimum Levenshtein distance (edit distance) between any two strings is greater than some constant c. Obviously, I can use randomization methods (e.g., a genetic algorithm), but I was hoping that this may be a well-studied problem in computer science or mathematics with some informative literature and an efficient algorithm or three.

    Problem 2: Same as above, except that adjacent characters cannot repeat; the i'th character in each string may not be equal to the i+1'th character. E.g., 'CAT', 'AGA' and 'TAG' are allowed; 'GAA', 'AAT', and 'AAA' are not.

    Background: The basis for this problem is bioinformatic and involves designing unique DNA tags that can be attached to biologically derived DNA fragments and then sequenced using a fancy second-generation sequencer. The goal is to be able to recognize each tag, allowing for random insertion, deletion, and substitution errors. The specific DNA sequencing technology has a relatively low error rate per base (~1%), but is less precise when a single base is repeated 2 or more times (motivating the additional constraints imposed in problem 2).
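
    For calibration, a brute-force baseline is easy to state (a hedged sketch only, nothing like an efficient algorithm; pure rejection sampling, so it is only practical for small n and m): keep drawing random candidate tags, accept one only if its edit distance to every accepted tag exceeds c, and optionally forbid adjacent repeats for Problem 2.

        import random

        def levenshtein(a, b):
            # Standard dynamic-programming edit distance.
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1,                 # deletion
                                   cur[j - 1] + 1,              # insertion
                                   prev[j - 1] + (ca != cb)))   # substitution
                prev = cur
            return prev[-1]

        def random_tag(alphabet, m, no_adjacent_repeat):
            tag = [random.choice(alphabet)]
            while len(tag) < m:
                ch = random.choice(alphabet)
                if no_adjacent_repeat and ch == tag[-1]:
                    continue
                tag.append(ch)
            return "".join(tag)

        def build_tag_set(n, m, alphabet="ACGT", c=3, no_adjacent_repeat=True, tries=100000):
            tags = []
            for _ in range(tries):
                cand = random_tag(alphabet, m, no_adjacent_repeat)
                if all(levenshtein(cand, t) > c for t in tags):
                    tags.append(cand)
                    if len(tags) == n:
                        break
            return tags

        print(build_tag_set(n=8, m=8))

    Anything smarter, such as adapting an error-correcting-code construction to edit distance, is exactly the literature the question is asking after.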

    Read the article

  • Python - Why use anything other than uuid4() for unique strings?

    - by orokusaki
    I see quite a few implementations of unique string generation for things like uploaded image names, session IDs, et al., and many of them employ hashes like SHA1, or others. I'm not questioning the legitimacy of using custom methods like this, but rather just the reason. If I want a unique string, I just say this:

        >>> import uuid
        >>> uuid.uuid4()
        07033084-5cfd-4812-90a4-e4d24ffb6e3d

    And I'm done with it. I wasn't very trusting before I read up on uuid, so I did this:

        >>> import uuid
        >>> s = set()
        >>> for i in range(5000000):  # That's 5 million!
        ...     s.add(uuid.uuid4())
        ...
        >>> len(s)
        5000000

    Not one repeat (I didn't expect one considering the odds are like 1.108e+50, but it's comforting to see it in action). You could reduce the odds even further by just combining two uuid4()s into one string. So, with that said, why do people spend time on random() and other approaches for unique strings, etc.? Is there an important security issue or other concern regarding uuid?

    Read the article

  • Slow Databinding setup time in C# .NET 4.0

    - by Svisstack
    Hello, I have a problem. I have a Windows Forms application with a dynamically generated layout, and I have a performance problem. In this form I use data binding from .NET 4.0; the binding works fine after it is set up, but the binding setup time for ONE control blocks my application for approximately 0.7 seconds. With all of my controls, the total binding setup time is around 2 minutes. I have tried every solution I can think of; I am out of ideas short of writing my own binding class. What is wrong with my code?

        case "Boolean":
        {
            Binding b = new Binding("Checked", __bindingsource, __ep.Name);
            CheckBox cb = new CheckBox();

            /*
             * HERE is the problem
             */
            cb.DataBindings.Add(b);
            /*
             * HERE is the end of problem
             */

            __flp.Controls.Add(cb);
            __bindingcontrol.AddBinding(b);
            break;
        }

    Without the problem lines everything works fast, but then of course there is no binding ;-( and I want binding turned on at normal speed. How can I do that?

    PS1. Layout is suspended during generation time.
    PS2. I have the same problem binding TextBoxes and PictureBoxes; CheckBox is only an example.

    Read the article

  • How do I apply a "template" or "skeleton" of code in C# here?

    - by Scott Stafford
    In my business layer, I need many, many methods that follow the pattern:

        public BusinessClass PropertyName
        {
            get
            {
                if (this.m_LocallyCachedValue == null)
                {
                    if (this.Record == null)
                    {
                        this.m_LocallyCachedValue = new BusinessClass(
                            this.Database, this.PropertyId);
                    }
                    else
                    {
                        this.m_LocallyCachedValue = new BusinessClass(
                            this.Database, this.Record.ForeignKeyName);
                    }
                }
                return this.m_LocallyCachedValue;
            }
        }

    I am still learning C#, and I'm trying to figure out the best way to write this pattern once and add methods to each business-layer class that follow this pattern, with the proper types and variable names substituted. BusinessClass is a type name that must be substituted, and PropertyName, PropertyId, ForeignKeyName, and m_LocallyCachedValue are all variables that should be substituted for. Are attributes usable here? Do I need reflection? How do I write the skeleton I provided in one place and then just write a line or two containing the substitution parameters and get the pattern to propagate itself?

    EDIT: Modified my misleading title. I am hoping to find a solution that doesn't involve code generation or copy/paste techniques, and rather to be able to write the skeleton of the code once in a base class in some form and have it be "instantiated" into lots of subclasses as the accessor for various properties.

    EDIT: Here is my solution, as suggested but left unimplemented by the chosen answerer.

        // I'll write many of these...
        public BusinessClass PropertyName
        {
            get
            {
                return GetSingleRelation(ref this.m_LocallyCachedValue,
                    this.PropertyId, "ForeignKeyName");
            }
        }

        // That all call this.
        public TBusinessClass GetSingleRelation<TBusinessClass>(
            ref TBusinessClass cachedField, int fieldId, string contextFieldName)
        {
            if (cachedField == null)
            {
                if (this.Record == null)
                {
                    ConstructorInfo ci = typeof(TBusinessClass).GetConstructor(
                        new Type[] { this.Database.GetType(), typeof(int) });
                    cachedField = (TBusinessClass)ci.Invoke(
                        new object[] { this.Database, fieldId });
                }
                else
                {
                    var obj = this.Record.GetType().GetProperty(contextFieldName).GetValue(
                        this.Record, null);
                    ConstructorInfo ci = typeof(TBusinessClass).GetConstructor(
                        new Type[] { this.Database.GetType(), obj.GetType() });
                    cachedField = (TBusinessClass)ci.Invoke(
                        new object[] { this.Database, obj });
                }
            }
            return cachedField;
        }

    Read the article

  • RFC: Whitespace's Assembly Mnemonics

    - by Noctis Skytower
    Request For Comment regarding Whitespace's assembly mnemonics. What follows is a first-generation attempt at creating mnemonics for a Whitespace assembly language.

        STACK
        =====
        push number
        copy
        copy number
        swap
        away
        away number

        MATH
        ====
        add
        sub
        mul
        div
        mod

        HEAP
        ====
        set
        get

        FLOW
        ====
        part label
        call label
        goto label
        zero label
        less label
        back
        exit

        I/O
        ===
        ochr
        oint
        ichr
        iint

    In the interest of making improvements to this small and simple instruction set, this is a second attempt.

        hold N      Push the number onto the stack
        copy        Duplicate the top item on the stack
        copy N      Copy the nth item on the stack (given by the argument) onto the top of the stack
        swap        Swap the top two items on the stack
        drop        Discard the top item on the stack
        drop N      Slide n items off the stack, keeping the top item
        add         Addition
        sub         Subtraction
        mul         Multiplication
        div         Integer Division
        mod         Modulo
        save        Store
        load        Retrieve
        L:          Mark a location in the program
        call L      Call a subroutine
        goto L      Jump unconditionally to a label
        if=0 L      Jump to a label if the top of the stack is zero
        if<0 L      Jump to a label if the top of the stack is negative
        return      End a subroutine and transfer control back to the caller
        exit        End the program
        print chr   Output the character at the top of the stack
        print int   Output the number at the top of the stack
        input chr   Read a character and place it in the location given by the top of the stack
        input int   Read a number and place it in the location given by the top of the stack

    What is the general consensus on the above revised list for Whitespace's assembly instructions? They definitely come from thinking outside of the box and trying to come up with a better mnemonic set than last time. When the previous Python interpreter was written, it was completed over two contiguous, rushed evenings. This rewrite deserves significantly more time now that it is the summer. Of course, the next version of Whitespace (0.4) may have its instructions revised even more, but this is just a redesign of what was originally done in a few hours. Hopefully, the instructions make more sense to those new to programming jargon.

    Read the article

  • Can parser combinators be made efficient?

    - by Jon Harrop
    Around 6 years ago, I benchmarked my own parser combinators in OCaml and found that they were ~5× slower than the parser generators on offer at the time. I recently revisited this subject and benchmarked Haskell's Parsec vs a simple hand-rolled precedence-climbing parser written in F#, and was surprised to find the F# to be 25× faster than the Haskell. Here's the Haskell code I used to read a large mathematical expression from file, parse and evaluate it:

        import Control.Applicative
        import Text.Parsec hiding ((<|>))

        expr = chainl1 term ((+) <$ char '+' <|> (-) <$ char '-')
        term = chainl1 fact ((*) <$ char '*' <|> div <$ char '/')
        fact = read <$> many1 digit <|> char '(' *> expr <* char ')'

        eval :: String -> Int
        eval = either (error . show) id . parse expr "" . filter (/= ' ')

        main :: IO ()
        main = do
          file <- readFile "expr"
          putStr $ show $ eval file
          putStr "\n"

    and here's my self-contained precedence-climbing parser in F#:

        let rec (|Expr|) (P(f, xs)) = Expr(loop (' ', f, xs))
        and loop = function
          | ' ' as oop, f, ('+' | '-' as op)::P(g, xs)
          | (' ' | '+' | '-' as oop), f, ('*' | '/' as op)::P(g, xs) ->
              let h, xs = loop (op, g, xs)
              let op =
                match op with
                | '+' -> (+) | '-' -> (-) | '*' -> (*) | '/' -> (/)
              loop (oop, op f h, xs)
          | _, f, xs -> f, xs
        and (|P|) = function
          | '('::Expr(f, ')'::xs) -> P(f, xs)
          | c::xs when '0' <= c && c <= '9' -> P(int(string c), xs)

    My impression is that even state-of-the-art parser combinators waste a lot of time backtracking. Is that correct? If so, is it possible to write parser combinators that generate state machines to obtain competitive performance, or is it necessary to use code generation?

    Read the article

  • How to develop modular web UIs with Django?

    - by nh2
    When doing larger sites in "big business", you most probably work in a team with several developers. Let's say dev A makes a form to insert new user data, B creates a user list, C does some privilege administration, and D does crazy statistics-graph work with image generation and so on. Each dev begins to develop his own component, creates a view and a template and tests it independently, until each component works. Now, the client wants to have all those components on one big HTML page. How do you achieve this? How do you assemble different views/templates in a form of composition so that they remain modular and can be developed and tested independently? It seems inheritance is not the way to go, because all of those UI components are equal and there is no hierarchy. The idea of the assembling template is something like:

        <html>
          <head>
            // include the css for the components and their assembly
          </head>
          <body>
            // include user data form here
            <some containers, images, and so on>
            // show user list
            // show privilege administration in this part
            // and finally, the nice statistic graphs
            // perhaps, we want to display some other components here in future
          </body>
        </html>

    I have not found an answer on the net yet. Most people come up with one big template which just implements all of the UI functionality, removing all modularity. Each component should have its own template and view dealing only with that component, developed by one person each, and they should then be stuck together just like bricks. I would highly appreciate any hints!
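
    Django's usual answer to this kind of composition is {% include %} for purely presentational fragments and a custom inclusion tag when a component needs its own small piece of view logic; here is a minimal sketch (the app, file and tag names are invented for illustration):

        # userlist/templatetags/userlist_tags.py  (hypothetical app layout)
        from django import template
        from django.contrib.auth.models import User

        register = template.Library()

        @register.inclusion_tag("userlist/user_list.html")
        def user_list(limit=20):
            # Renders userlist/user_list.html with its own context,
            # independently of whichever page embeds the tag.
            return {"users": User.objects.order_by("username")[:limit]}

    The assembling template then stays a thin composition layer: it does {% load userlist_tags %} once and drops {% user_list 10 %} (or an {% include %}) wherever each component belongs, so each dev owns a component template plus its tag/view and nobody has to touch one big monolithic template.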

    Read the article
