Search Results

Search found 7086 results on 284 pages for 'explain'.

Page 226/284

  • Protecting Content with AuthLogic

    - by Rob Wilkerson
    I know this sounds like a really, really simple use case and I'm hoping that it is, but I swear I've looked all over the place and haven't found any mention of any way - not even the best way - of doing this. I'm brand-spanking new to Ruby, Rails and everything surrounding either (which may explain a lot). The dummy app that I'm using as my learning tool requires authentication in order to do almost anything meaningful, so I chose to start by solving that problem. I've installed the AuthLogic gem and have it working nicely to the extent that's covered by the intro documentation and the Railscast, but now that I can register, log in and log out...I need to do something with it. As an example, I need to create a page where users can upload images. I'm planning to have an ImagesController with an upload action method, but I want it accessible only to logged-in users. I suppose that in every restricted action I could add code to redirect if there's no current_user, but that seems really verbose. Is there a better way of doing this that allows me to define or identify restricted areas and handle the authentication check in one place?
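
    A minimal sketch of the usual Authlogic approach (the session class and route helper names assume the standard setup from the Railscast): put the check in a before_filter on ApplicationController, then opt whole controllers or individual actions into it, so the redirect logic lives in exactly one place.

        class ApplicationController < ActionController::Base
          helper_method :current_user

          private

          def current_user_session
            @current_user_session ||= UserSession.find
          end

          def current_user
            @current_user ||= current_user_session && current_user_session.user
          end

          # Redirect anonymous visitors away from restricted actions.
          def require_user
            unless current_user
              flash[:notice] = "You must be logged in to access this page"
              redirect_to new_user_session_url
            end
          end
        end

        class ImagesController < ApplicationController
          before_filter :require_user   # or: before_filter :require_user, :only => [:upload]

          def upload
            # only reached by logged-in users
          end
        end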

    Read the article

  • Wrapper Classes for Backward compatibility in Java

    - by Casebash
    There is an interesting article here on maintaining backward compatibility for Java. In the wrapper class section, I can't actually understand what the wrapper class accomplishes. In the following code from MyApp, WrapNewClass.checkAvailable() could be replaced by Class.forName("NewClass"). static { try { WrapNewClass.checkAvailable(); mNewClassAvailable = true; } catch (Throwable ex) { mNewClassAvailable = false; } } Consider when NewClass is unavailable. In the code where we use the wrapper (see below), all we have done is replace a class that doesn't exist with one that exists, but which can't be compiled as it uses a class that doesn't exist. public void diddle() { if (mNewClassAvailable) { WrapNewClass.setGlobalDiv(4); WrapNewClass wnc = new WrapNewClass(40); System.out.println("newer API is available - " + wnc.doStuff(10)); } else { System.out.println("newer API not available"); } } Can anyone explain why this makes a difference? I assume it has something to do with how Java compiles code - which I don't know much about.
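
    For context, a hedged reconstruction of what such a wrapper typically looks like (the linked article's actual code may differ). The usual intent of the pattern is to concentrate every direct reference to NewClass in this one class, so that a missing NewClass only affects WrapNewClass rather than whichever application class happens to call the new API:

        public class WrapNewClass {
            private final NewClass mInstance;

            // Typically a no-op; its only job is to give MyApp's static block
            // something to call, so that loading/linking of this wrapper (and,
            // on eagerly resolving VMs, of NewClass) is attempted inside the
            // try/catch rather than somewhere uncontrolled.
            public static void checkAvailable() {}

            public WrapNewClass(int arg) {
                mInstance = new NewClass(arg);
            }

            public static void setGlobalDiv(int div) {
                NewClass.setGlobalDiv(div);
            }

            public int doStuff(int val) {
                return mInstance.doStuff(val);
            }
        }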

    Read the article

  • Output is different for R-value and L-value. Why?

    - by Leonid Volnitsky
    Can someone explain to me why output for R-value is different from L-value? #include <iostream> #include <vector> using namespace std; template<typename Ct> struct ct_wrapper { Ct&& ct; // R or L ref explicit ct_wrapper(Ct&& ct) : ct(std::forward<Ct>(ct)) { std::cout << this->ct[1];}; }; int main() { // L-val vector<int> v{1,2,3}; ct_wrapper<vector<int>&> lv(v); cout << endl << lv.ct[0] << lv.ct[1] << lv.ct[2] << endl; // R-val ct_wrapper<vector<int>&&> rv(vector<int>{1,2,3}); cout << endl << rv.ct[0] << rv.ct[1] << rv.ct[2] << endl; } Output (same for gcc48 and clang32): 2 123 2 003
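
    A minimal sketch of the suspected cause, assuming the difference comes from lifetime rather than from the wrapper itself: in the R-value case the temporary vector dies at the end of the declaration statement, so rv.ct is left dangling and the later reads are undefined behavior ("003" is just one possible outcome):

        #include <iostream>
        #include <vector>

        struct wrapper {
            const std::vector<int>& ref;   // reference member, no copy is made
            explicit wrapper(const std::vector<int>& v) : ref(v) {}
        };

        int main() {
            wrapper w(std::vector<int>{1, 2, 3});  // the temporary dies at the ';'
            std::cout << w.ref[0] << '\n';         // undefined behavior: dangling reference
        }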

    Read the article

  • Virtual destructor - How does it work?

    - by Prabhu
    Hello All, a few hours back I was fiddling with a Memory Leak issue and it turned out that I really got some basic stuff about virtual destructors wrong!! Let me explain my class design. class Base { virtual void push_elements() {} }; class Derived : public Base { vector<int> x; public: void push_elements(){ for(int i=0;i<5;i++) x.push_back(i); } }; int main() { Base* b = new Derived(); b->push_elements(); delete b; } The bounds checker tool reported a memory leak in the derived class vector. And I figured out that the destructor is not virtual and the derived class destructor is not called. And it surprisingly got fixed when I made the destructor virtual. But my question is "isn't the vector deallocated automatically even if the derived class destructor is not called"? Is that a quirk in the BoundsChecker tool or is my understanding of virtual destructors wrong? :)
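
    A minimal sketch of the usual fix. Deleting a derived object through a Base* whose destructor isn't virtual is undefined behavior; in practice only ~Base() runs, so the vector member's destructor never executes and its heap storage leaks, which is what BoundsChecker is reporting:

        class Base {
        public:
            virtual ~Base() {}             // makes `delete b` through a Base* run ~Derived() first
            virtual void push_elements() {}
        };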

    Read the article

  • Does unboxing just return a pointer to the value within the boxed object on the heap?

    - by Charles
    In this MSDN Magazine article, the author states (emphasis mine): Note that boxing always creates a new object and copies the unboxed value's bits to the object. On the other hand, unboxing simply returns a pointer to the data within a boxed object: no memory copy occurs. However, it is commonly the case that your code will cause the data pointed to by the unboxed reference to be copied anyway. I'm confused by the sentence I've bolded and the sentence that follows it. From everything else I've read, including this MSDN page, I've never before heard that unboxing just returns a pointer to the value on the heap. I was under the impression that unboxing would result in you having a variable containing a copy of the value on the stack, just as you began with. After all, if my variable contains "a pointer to the value on the heap", then I haven't got a value type, I've got a pointer. Can someone explain what this means? Was the author on crack? (There is at least one other glaring error in the article). And if this is true, what are the cases where "your code will cause the data pointed to by the unboxed reference to be copied anyway"? I just noticed that the article is nearly 10 years old, so maybe this is something that changed very early on in the life of .Net.
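
    A small sketch of the distinction as I read it, at the IL level: the unbox instruction only yields a pointer into the boxed object, and the copy happens when that value is then assigned somewhere - which is what the author means by your code causing the data to be copied anyway:

        object boxed = 42;        // boxing: allocates a heap object and copies the bits in
        int value = (int)boxed;   // unboxing: obtains a pointer to the Int32 inside 'boxed',
                                  // then the assignment copies those bits into 'value'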

    Read the article

  • MySQL "OR MATCH" hangs (very slow) on multiple tables

    - by Kerry
    After learning how to do MySQL Full-Text search, the recommended solution for multiple tables was OR MATCH and then do the other database call. You can see that in my query below. When I do this, it just gets stuck in a "busy" state, and I can't access the MySQL database. SELECT a.`product_id`, a.`name`, a.`slug`, a.`description`, b.`list_price`, b.`price`, c.`image`, c.`swatch`, e.`name` AS industry, MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE ) AS relevance FROM `products` AS a LEFT JOIN `website_products` AS b ON (a.`product_id` = b.`product_id`) LEFT JOIN ( SELECT `product_id`, `image`, `swatch` FROM `product_images` WHERE `sequence` = 0) AS c ON (a.`product_id` = c.`product_id`) LEFT JOIN `brands` AS d ON (a.`brand_id` = d.`brand_id`) INNER JOIN `industries` AS e ON (a.`industry_id` = e.`industry_id`) WHERE b.`website_id` = %d AND b.`status` = %d AND b.`active` = %d AND MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE ) OR MATCH ( d.`name` ) AGAINST ( '%s' IN BOOLEAN MODE ) GROUP BY a.`product_id` ORDER BY relevance DESC LIMIT 0, 9 Any help would be greatly appreciated. EDIT All the tables involved are MyISAM, utf8_general_ci. Here's the EXPLAIN SELECT statement: id select_type table type possible_keys key key_len ref rows Extra 1 PRIMARY a ALL NULL NULL NULL NULL 16076 Using temporary; Using filesort 1 PRIMARY b ref product_id product_id 4 database.a.product_id 2 1 PRIMARY e eq_ref PRIMARY PRIMARY 4 database.a.industry_id 1 1 PRIMARY <derived2> ALL NULL NULL NULL NULL 23261 1 PRIMARY d eq_ref PRIMARY PRIMARY 4 database.a.brand_id 1 Using where 2 DERIVED product_images ALL NULL NULL NULL NULL 25933 Using where I don't know how to make that look neater -- sorry about that UPDATE it returns the query after 196 seconds (I think correctly). The query without multiple tables takes about .56 seconds (which I know is really slow, we plan on changing to solr or sphinx soon), but 196 seconds?? If we could add a number to the relevance if it was in the brand name ( d.name ), that would also work
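
    A couple of hedged guesses based on that EXPLAIN output (every table is scanned with type ALL). In BOOLEAN MODE, MySQL will run MATCH() even against columns that have no FULLTEXT index, just very slowly, so it is worth confirming both column lists are actually covered; and since AND binds tighter than OR, the two MATCH() clauses probably need their own parentheses so the website_id/status filters apply to both branches:

        ALTER TABLE products ADD FULLTEXT INDEX ft_products (name, sku, description);
        ALTER TABLE brands   ADD FULLTEXT INDEX ft_brands   (name);

        -- WHERE b.website_id = %d AND b.status = %d AND b.active = %d
        --   AND ( MATCH(a.name, a.sku, a.description) AGAINST ('%s' IN BOOLEAN MODE)
        --      OR MATCH(d.name) AGAINST ('%s' IN BOOLEAN MODE) )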

    Read the article

  • Using an SHA1 with Microsoft CAPI

    - by Erik Jõgi
    I have an SHA1 hash and I need to sign it. The CryptSignHash() method requires a HCRYPTHASH handle for signing. I create it and as I have the actual hash value already then set it: CryptCreateHash(cryptoProvider, CALG_SHA1, 0, 0, &hash); CryptSetHashParam(hash, HP_HASHVAL, hashBytes, 0); The hashBytes is an array of 20 bytes. However the problem is that the signature produced from this HCRYPTHASH handle is incorrect. I traced the problem down to the fact that CAPI actually doesn't use all 20 bytes from my hashBytes array. For some reason it thinks that SHA1 is only 4 bytes. To verify this I wrote this small program: HCRYPTPROV cryptoProvider; CryptAcquireContext(&cryptoProvider, NULL, NULL, PROV_RSA_FULL, 0); HCRYPTHASH hash; HCRYPTKEY keyForHash; CryptCreateHash(cryptoProvider, CALG_SHA1, keyForHash, 0, &hash); DWORD hashLength; CryptGetHashParam(hash, HP_HASHSIZE, NULL, &hashLength, 0); printf("hashLength: %d\n", hashLength); And this prints out hashLength: 4 ! Can anyone explain what I am doing wrong or why Microsoft CAPI thinks that SHA1 is 4 bytes (32 bits) instead of 20 bytes (160 bits).
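
    One hedged observation about the test program: when pbData is NULL, CryptGetHashParam reports the size of the buffer needed for the requested parameter - and HP_HASHSIZE is returned as a DWORD, hence 4 - not the hash length itself. Querying it with a buffer shows the expected 20 bytes (whether the bad signature has the same root cause is a separate question):

        DWORD hashSize = 0;
        DWORD cbHashSize = sizeof(hashSize);
        if (CryptGetHashParam(hash, HP_HASHSIZE, (BYTE *)&hashSize, &cbHashSize, 0)) {
            printf("hash length: %lu bytes\n", (unsigned long)hashSize);  /* 20 for SHA-1 */
        }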

    Read the article

  • What caused the rails application crash?

    - by so1o
    I'm sure someone can explain this. We have an application that has been in production for a year. Recently we saw an increase in the number of support requests from people having difficulty signing into the system. After scratching our heads because we couldn't recreate the problem in development, we decided we'd switch on the debug logger in production for a month. That was June 5th. The application worked fine with the above change and we were waiting. Then yesterday we noticed that the log files were getting huge, so we made another change in production: config.logger = Logger.new("#{RAILS_ROOT}/log/production.log", 50, 1048576) After this change, the application started crashing while processing a particular file. This particular line of code was RAILS_DEFAULT_LOGGER.info "Payment Information Request: ", request.inspect As you can see, there was a comma instead of a plus sign. This piece of code was introduced in March. The question is this: why did the application fail now? If changing the debug level caused the application to process this line of code, it should have started failing on June 5th! Why today? Please, someone help us. Are we missing the obvious here? If you don't have an answer, at least let us know we aren't the only ones going bonkers.

    Read the article

  • prolog - infinite rule

    - by Tom
    I have the following rules % Signature: natural_number(N)/1 % Purpose: N is a natural number. natural_number(0). natural_number(s(X)) :- natural_number(X). ackermann(0, N, s(N)). //rule 1 ackermann(s(M),0,Result):- ackermann(M,s(0),Result). //rule 2 ackermann(s(M),s(N),Result):-ackermann(M,Result1,Result),ackermann(s(M),N,Result1). //rule 3 The query is: ackermann(M,N,s(s(0))). Now, as I understood it, in the third calculation we should get an infinite search (a failure branch). But when I check it, I get a finite search (a failure branch). I'll explain: in the first, we get a substitution of M=0, N=s(0) (rule 1 - success!). In the second, we get a substitution of M=s(0), N=0 (rule 2 - success!). But what now? I try to match M=s(s(0)), N=0, but that gives a finite search - a failure branch. Why doesn't the compiler write "fail"? Thank you.

    Read the article

  • Tail recursion and memoization with C#

    - by Jay
    I'm writing a function that finds the full path of a directory based on a database table of entries. Each record contains a key, the directory's name, and the key of the parent directory (it's the Directory table in an MSI if you're familiar). I had an iterative solution, but it started looking a little nasty. I thought I could write an elegant tail recursive solution, but I'm not sure anymore. I'll show you my code and then explain the issues I'm facing. Dictionary<string, string> m_directoryKeyToFullPathDictionary = new Dictionary<string, string>(); ... private string ExpandDirectoryKey(Database database, string directoryKey) { // check for terminating condition string fullPath; if (m_directoryKeyToFullPathDictionary.TryGetValue(directoryKey, out fullPath)) { return fullPath; } // inductive step Record record = ExecuteQuery(database, "SELECT DefaultDir, Directory_Parent FROM Directory where Directory.Directory='{0}'", directoryKey); // null check string directoryName = record.GetString("DefaultDir"); string parentDirectoryKey = record.GetString("Directory_Parent"); return Path.Combine(ExpandDirectoryKey(database, parentDirectoryKey), directoryName); } This is how the code looked when I realized I had a problem (with some minor validation/massaging removed). I want to use memoization to short circuit whenever possible, but that requires me to make a function call to the dictionary to store the output of the recursive ExpandDirectoryKey call. I realize that I also have a Path.Combine call there, but I think that can be circumvented with a ... + Path.DirectorySeparatorChar + .... I thought about using a helper method that would memoize the directory and return the value so that I could call it like this at the end of the function above: return MemoizeHelper( m_directoryKeyToFullPathDictionary, Path.Combine(ExpandDirectoryKey(database, parentDirectoryKey)), directoryName); But I feel like that's cheating and not going to be optimized as tail recursion. Any ideas? Should I be using a completely different strategy? This doesn't need to be a super efficient algorithm at all, I'm just really curious. I'm using .NET 4.0, btw. Thanks!
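
    For what it's worth, a hedged sketch of the simplest variant: store the result in the dictionary just before returning, which keeps the memoization without a helper. Note that this isn't a tail call anyway - Path.Combine runs after the recursive call returns - and the C# compiler doesn't emit tail calls, so an explicit loop that walks up the parent chain would be the other natural shape:

        private string ExpandDirectoryKey(Database database, string directoryKey)
        {
            string fullPath;
            if (m_directoryKeyToFullPathDictionary.TryGetValue(directoryKey, out fullPath))
            {
                return fullPath;
            }

            Record record = ExecuteQuery(database,
                "SELECT DefaultDir, Directory_Parent FROM Directory where Directory.Directory='{0}'",
                directoryKey);

            string directoryName = record.GetString("DefaultDir");
            string parentDirectoryKey = record.GetString("Directory_Parent");

            fullPath = Path.Combine(ExpandDirectoryKey(database, parentDirectoryKey), directoryName);
            m_directoryKeyToFullPathDictionary[directoryKey] = fullPath;   // memoize before returning
            return fullPath;
        }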

    Read the article

  • Android OS 2.2 Permissions: I have absolutely no idea why this simple piece of code doesn't work. Wh

    - by Kevin
    I'm just playing around with some code. I create an Activity and simply do something like this: long lo = currentTimeMillis(); System.out.println(lo); lo *= 3; System.out.println(lo); SystemClock.setCurrentTimeMillis(lo); System.out.println( currentTimeMillis() ); Yes, in my AndroidManifest.xml, I've added: <uses-permission android:name="android.permission.SET_TIME"></uses-permission> <uses-permission android:name="android.permission.SET_TIME_ZONE"></uses-permission> Nothing changes. The SystemClock is never reset...it just keeps on ticking. The error that I'm getting just says that the permission "SET_TIME" was not granted to the program. Protection level 3. The permissions are there...and in the API for 2.2 it says that this feature is supported now. I have no idea what I'm doing wrong. If android.content.Intent comes into play, please explain. I don't really understand the idea behind intents! Thanks for any help!

    Read the article

  • Finding part of a string that a user has sent via POST

    - by blerh
    My users can send links from popular file hosts like Rapidshare, Megaupload, Hotfile and FileFactory. I need to somehow find out what filehost they sent the link from and use the correct class for it appropriately. For example, if I sent a Rapidshare link in a form on my web page, I need to somehow cycle through each file host that I allow until I find the text rapidshare.com, then I know the user has posted a Rapidshare link. Perhaps a PHP example: switch($_POST['link']) { case strstr($_POST['link'], 'rapidshare.com'): // the link is a Rapidshare one break; case strstr($_POST['link'], 'megaupload.com'): // the link is a Megaupload one break; case strstr($_POST['link'], 'hotfile.com'): // the link is a Hotfile one break; case strstr($_POST['link'], 'filefactory.com'): // the link is a Filefactory one break; } However, I know for a fact this isn't correct and I'd rather not use a huge IF statement if I can help it. Does anyone have any solution to this problem? If you need me to explain more I can try, English isn't my native language so it's kinda hard. Thanks all.
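
    One hedged alternative to the switch: parse the host out of the URL once with parse_url(), then look it up in a map from host to handler (the handler class names below are made up for illustration):

        $hosts = array(
            'rapidshare.com'  => 'RapidshareHandler',
            'megaupload.com'  => 'MegauploadHandler',
            'hotfile.com'     => 'HotfileHandler',
            'filefactory.com' => 'FilefactoryHandler',
        );

        $host = strtolower((string) parse_url($_POST['link'], PHP_URL_HOST));
        $host = preg_replace('/^www\./', '', $host);   // drop a leading "www."

        if (isset($hosts[$host])) {
            $handlerClass = $hosts[$host];   // hypothetical per-host class
            $handler = new $handlerClass();
        } else {
            // not a supported file host
        }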

    Read the article

  • How to create Automation Add In Formula/Function and Excel Add In buttons (vsto) for them together?

    - by ticky
    Ok, let me explain it a little bit better. Here is one example of how to create formulas/functions http://blogs.msdn.com/b/eric_carter/archive/2004/12/01/273127.aspx?PageIndex=1#comments I implemented something like that, and I even added values in the registry, so that this Automation AddIn doesn't have to be added manually in Excel, but automatically.. I created a SETUP project for this project and it works GREAT. Then.. After some time, I wanted to create buttons in Excel for the functions that I use. Those are custom functions, using some web services. I created an Excel AddIn and added a Ribbon with buttons - one button = one custom function. I can publish this project and I am creating a VSTO, so this way I can install Excel ribbon buttons in a custom group of mine. Now I have 2 installations, the first for the Automation AddIn and the second for the Excel AddIn. How can I connect them? I tried to include the VSTO in the Setup - something like this: [I WILL ADD IT LATER] When I install it, it works great, it installs both parts. But when I install it on my friend's computer, it doesn't show the Ribbon buttons. What could be the problem? If there is some other way to integrate those two, I would be very grateful!!!!! Thanks! Tijana

    Read the article

  • Creating WPF components in background thread

    - by mizipzor
    I'm working on a reporting system where a series of DocumentPage instances are to be created through a DocumentPaginator. These documents include a number of WPF components that are to be instantiated so the paginator includes the correct things when later sent to the XpsDocumentWriter (which in turn is sent to the actual printer). My problem now is that the DocumentPage instances take quite a while to create (enough for Windows to mark the application as frozen) so I tried to create them in a background thread, which is problematic since WPF expects the attributes on them to be set from the GUI thread. I would also like to have a progress bar showing up, indicating how many pages have been created so far. Thus, it looks like I'm trying to get two things to happen in parallel on the GUI. The problem is hard to explain and I'm really not sure how to tackle it. In short: Create a series of DocumentPages, which include WPF components. These are to be created on a background thread, or with some other trick so the application isn't frozen. After each page is created, a WPF ProgressBar should be updated. If there is no decent way to do this, alternate solutions and approaches are more than welcome.
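
    One hedged way to sidestep the threading restriction entirely: keep page creation on the GUI thread, but build one page per dispatcher pass at Background priority, so input and the ProgressBar keep updating between pages (BuildDocumentPage, pages, totalPages and progressBar are placeholders for your own members):

        private void CreateNextPage(int pageIndex)
        {
            if (pageIndex >= totalPages)
            {
                return;   // all pages created, hand them to the paginator
            }

            pages.Add(BuildDocumentPage(pageIndex));   // runs on the GUI thread
            progressBar.Value = pageIndex + 1;

            // Queue the next page behind pending input and rendering work.
            Dispatcher.BeginInvoke(
                DispatcherPriority.Background,
                new Action(() => CreateNextPage(pageIndex + 1)));
        }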

    Read the article

  • Two entities with @ManyToOne should join the same table

    - by Ivan Yatskevich
    I have the following entities Student @Entity public class Student implements Serializable { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; //getter and setter for id } Teacher @Entity public class Teacher implements Serializable { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; //getter and setter for id } Task @Entity public class Task implements Serializable { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; @ManyToOne(optional = false) @JoinTable(name = "student_task", inverseJoinColumns = { @JoinColumn(name = "student_id") }) private Student author; @ManyToOne(optional = false) @JoinTable(name = "student_task", inverseJoinColumns = { @JoinColumn(name = "teacher_id") }) private Teacher curator; //getters and setters } Consider that author and curator are already stored in DB and both are in the attached state. I'm trying to persist my Task: Task task = new Task(); task.setAuthor(author); task.setCurator(curator); entityManager.persist(task); Hibernate executes the following SQL: insert into student_task (teacher_id, id) values (?, ?) which, of course, leads to null value in column "student_id" violates not-null constraint Can anyone explain this issue and possible ways to resolve it?
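
    For reference, a hedged sketch of the more conventional mapping, which avoids sharing a single join table between two many-to-one associations by giving each one its own foreign-key column on the task table (column names here are illustrative):

        @Entity
        public class Task implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;

            @ManyToOne(optional = false)
            @JoinColumn(name = "author_id")    // FK column on the task table
            private Student author;

            @ManyToOne(optional = false)
            @JoinColumn(name = "curator_id")   // FK column on the task table
            private Teacher curator;

            //getters and setters
        }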

    Read the article

  • P/Invoke declarations should not be safe-critical

    - by Bobrovsky
    My code imports following native methods: DeleteObject, GetFontData and SelectObject from gdi32.dll GetDC and ReleaseDC from user32.dll I want to run the code in full trust and medium trust environments (I am fine with exceptions being thrown when these imported methods are indirectly used in medium trust environments). When I run Code Analysis on the code I get warnings like: CA5122 P/Invoke declarations should not be safe-critical. P/Invoke method 'GdiFont.DeleteObject(IntPtr)' is marked safe-critical. Since P/Invokes may only be called by critical code, this declaration should either be marked as security critical, or have its annotation removed entirely to avoid being misleading. Could someone explain me (in layman terms) what does this warning really mean? I tried putting these imports in static SafeNativeMethods class as internal static methods but this doesn't make the warnings go away. I didn't try to put them in NativeMethods because after reading this article I am unsure that it's the right way to go because I don't want my code to be completely unusable in medium trust environments (I think this will be the consequence of moving imports to NativeMethods). Honestly, I am pretty much confused about the real meaning of the warning and consequences of different options to suppressing it. Could someone shed some light on all this? EDIT: My code target .NET 2.0 framework. Assembly is marked with [assembly: AllowPartiallyTrustedCallers] Methods are declared like this: [DllImport("gdi32")] internal static extern int DeleteObject(HANDLE hObject);

    Read the article

  • Help me write my LISP :) LISP environments, Ruby Hashes...

    - by MikeC8
    I'm implementing a rudimentary version of LISP in Ruby just in order to familiarize myself with some concepts. I'm basing my implementation off of Peter Norvig's Lispy (http://norvig.com/lispy.html). There's something I'm missing here though, and I'd appreciate some help... He subclasses Python's dict as follows: class Env(dict): "An environment: a dict of {'var':val} pairs, with an outer Env." def __init__(self, parms=(), args=(), outer=None): self.update(zip(parms,args)) self.outer = outer def find(self, var): "Find the innermost Env where var appears." return self if var in self else self.outer.find(var) He then goes on to explain why he does this rather than just using a dict. However, for some reason, his explanation keeps passing in through my eyes and out through the back of my head. Why not use a dict, and then inside the eval function, when a new "sub-environment" needs to be created, just take the existing dict and update the key/value pairs that need to be updated, and pass that new dict into the next eval? Won't the Python interpreter keep track of the previous "outer" envs? And won't the nature of the recursion ensure that the values are pulled out from "inner" to "outer"? I'm using Ruby, and I tried to implement things this way. Something's not working though, and it might be because of this, or perhaps not. Here's my eval function, env being a regular Hash: def eval(x, env = $global_env) ........ elsif x[0] == "lambda" then ->(*args) { eval(x[2], env.merge(Hash[*x[1].zip(args).flatten(1)])) } ........ end The line that matters of course is the "lambda" one. If there is a difference, what's importantly different between what I'm doing here and what Norvig did with his Env class? If there's no difference, then perhaps someone can enlighten me as to why Norvig uses the Env class. Thanks :)
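
    For comparison, a minimal Ruby sketch of Norvig-style environment chaining. The usual reason for keeping an outer pointer instead of merging hashes is mutation: (set! x v) has to modify the hash that defines x, so every closure sharing that environment sees the change and later definitions in an outer scope stay visible, whereas a merged copy freezes a snapshot at the moment the lambda is called (this sketch assumes that sharing is the behaviour you want):

        class Env < Hash
          attr_reader :outer

          def initialize(params = [], args = [], outer = nil)
            update(Hash[*params.zip(args).flatten(1)])
            @outer = outer
          end

          # Innermost environment in which var is defined, or nil.
          def find(var)
            key?(var) ? self : (outer && outer.find(var))
          end
        end

        global = Env.new
        global["x"] = 1
        local = Env.new(["y"], [2], global)
        local.find("x")["x"] = 42   # a set! seen by everything sharing global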

    Read the article

  • Improving performance in this query

    - by Luiz Gustavo F. Gama
    I have 3 tables with user logins: sis_login = administrators tb_rb_estrutura = coordinators tb_usuario = clients I created a VIEW to unite all these users, separating them by levels, as follows: create view `login_names` as select `n1`.`cod_login` as `id`, '1' as `level`, `n1`.`nom_user` as `name` from `dados`.`sis_login` `n1` union all select `n2`.`id` as `id`, '2' as `level`, `n2`.`nom_funcionario` as `name` from `tb_rb_estrutura` `n2` union all select `n3`.`cod_usuario` as `id`, '3' as `level`, `n3`.`dsc_nome` as `name` from `tb_usuario` `n3`; So, up to three repeated ids can occur for different users, which is why I separated them by levels. This VIEW is just to return the user name, according to the user's id and level. Considering it has about 500,000 registered users, this view takes about 1 second to load. That is already too much time, and it gets much worse when I need to return the latest posts on the forums of my website. The forum tables return the user id and level, and the name is then looked up in this VIEW. I have registered 18 forums. When I run the query, it takes one second for each forum = 18 seconds. OMG. This page loads every time somebody enters my website. This is my query: select `x`.`forum_id`, `x`.`topic_id`, `l`.`nome` from ( select `t`.`forum_id`, `t`.`topic_id`, `t`.`data`, `t`.`user_id`, `t`.`user_level` from `tb_forum_topics` `t` union all select `a`.`forum_id`, `a`.`topic_id`, `a`.`data`, `a`.`user_id`, `a`.`user_level` from `tb_forum_answers` `a` ) `x` left outer join `login_names` `l` on `l`.`id` = `x`.`user_id` and `l`.`level` = `x`.`user_level` group by `x`.`forum_id` asc USING EXPLAIN: id select_type table type possible_keys key key_len ref rows Extra 1 PRIMARY <derived2> ALL NULL NULL NULL NULL 6 Using temporary; Using filesort 1 PRIMARY <derived4> ALL NULL NULL NULL NULL 530415 4 DERIVED n1 ALL NULL NULL NULL NULL 114 5 UNION n2 ALL NULL NULL NULL NULL 2 6 UNION n3 ALL NULL NULL NULL NULL 530299 NULL UNION RESULT ALL NULL NULL NULL NULL NULL 2 DERIVED t ALL NULL NULL NULL NULL 3 3 UNION r ALL NULL NULL NULL NULL 3 NULL UNION RESULT ALL NULL NULL NULL NULL NULL Can somebody help me or give a suggestion?

    Read the article

  • structDelete doesn't affect the shallow copy?

    - by Travis
    I was playing around with onError, so I tried to create an error using a large XML document object. <cfset XMLByRef = variables.parsedXML.XMLRootElement.XMLChildElement> <cfset structDelete(variables.parsedXML, "XMLRootElement")> <cfset startXMLShortLoop = getTickCount()> <cfloop from = "1" to = "#arrayLen(variables.XMLByRef)#" index = "variables.i"> <cfoutput>#variables.XMLByRef[variables.i].id.xmltext#</cfoutput><br /> </cfloop> <cfset stopXMLShortLoop = getTickCount()> I expected to get an error because I deleted the structure I was referencing. From LiveDocs: Variable Assignment - Creates an additional reference, or alias, to the structure. Any change to the data using one variable name changes the structure that you access using the other variable name. This technique is useful when you want to add a local variable to another scope or otherwise change a variable's scope without deleting the variable from the original scope. Instead I got 580df1de-3362-ca9b-b287-47795b6cdc17 25a00498-0f68-6f04-a981-56853c0844ed ... ... ... db49ed8a-0ba6-8644-124a-6d6ebda3aa52 57e57e28-e044-6119-afe2-aebffb549342 Looped 12805 times in 297 milliseconds <cfdump var = "#variables#"> Shows there's nothing in the structure, just parsedXML.xmlRoot.xmlName with the value of XMLRootElement. I also tried <cfset structDelete(variables.parsedXML.XMLRootElement, "XMLChildElement")> as well as structClear for both. More information on deleting from the XML document object: http://help.adobe.com/en_US/ColdFusion/9.0/Developing/WSc3ff6d0ea77859461172e0811cbec22c24-78e3.html Can someone please explain my faulty logic? Thanks.

    Read the article

  • Generic version control strategy for select table data within a heavily normalized database

    - by leppie
    Hi, sorry for the long-winded title, but the requirement/problem is rather specific. With reference to the following sample (but very simplified) structure (in pseudo-SQL), I hope to explain it a bit better. TABLE StructureName { Id GUID PK, Name varchar(50) NOT NULL } TABLE Structure { Id GUID PK, ParentId GUID (FK to Structure), NameId GUID (FK to StructureName) NOT NULL } TABLE Something { Id GUID PK, RootStructureId GUID (FK to Structure) NOT NULL } As one can see, Structure is a simple tree structure (not worried about ordering of children for the problem). StructureName is a simplification of a translation system. Finally, 'Something' is simply something referencing the tree's root structure. This is just one of many tables that need to be versioned, but this one serves as a good example for most cases. There is a requirement to version any changes to the name and/or the tree 'layout' of the Structure table. Previous versions should always be available. There seem to be a few possibilities to tackle this issue, like copying the entire structure, but most approaches cause one to 'lose' referential integrity. For example, if one followed this approach, one would have to make a duplicate of the 'Something' record, given that the root structure will be a new record and have a new ID. Other possible avenues are looking into how wikis handle this, or going a lot further and looking at how proper version control systems work. Currently, I feel a bit clueless about how to proceed on this in a generic way. Any ideas will be greatly appreciated. Thanks, leppie

    Read the article

  • Boost Asio UDP retrieve last packet in socket buffer

    - by Alberto Toglia
    I have been messing around with Boost Asio for some days now but I got stuck with this weird behavior. Please let me explain. Computer A is sending continuous UDP packets every 500 ms to computer B; computer B wants to read A's packets at its own rate but only wants A's last packet, obviously the most updated one. It has come to my attention that when I do a: mSocket.receive_from(boost::asio::buffer(mBuffer), mEndPoint); I can get OLD packets that were not processed (almost every time). Does this make any sense? A friend of mine told me that sockets maintain a buffer of packets and therefore if I read with a lower frequency than the sender this could happen. ¡? So, the first question is how is it possible to receive the last packet and discard the ones I missed? Later I tried using the async example from the Boost documentation but found it did not do what I wanted. http://www.boost.org/doc/libs/1_36_0/doc/html/boost_asio/tutorial/tutdaytime6.html From what I could tell, async_receive_from should call the method "handle_receive" when a packet arrives, and that works for the first packet after the service was "run". If I wanted to keep listening on the port I should call async_receive_from again in the handler code, right? BUT what I found is that it starts an infinite loop: it doesn't wait till the next packet, it just enters "handle_receive" again and again. I'm not writing a server application, and a lot of things are going on (it's a game), so my second question is: do I have to use threads to use the async receive method properly? Is there some example with threads and async receive? Thanks for your attention.
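
    For the first question, a hedged sketch (assuming a blocking socket): the OS does queue datagrams per socket, so one way to get only the freshest data is to drain everything currently queued and keep the last datagram read:

        boost::asio::ip::udp::endpoint sender;
        std::size_t received = 0;
        while (mSocket.available() > 0) {
            received = mSocket.receive_from(boost::asio::buffer(mBuffer), sender);
        }
        // if received > 0, mBuffer now holds the most recently queued datagram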

    Read the article

  • How do you deserialize a collection with child collections?

    - by Stuart Helwig
    I have a collection of custom entity objects, one property of which is an ArrayList of byte arrays. The custom entity is serializable and the collection property is marked with the following attributes: [XmlArray("Images"), XmlArrayItem("Image",typeof(byte[]))] So I serialize a collection of these custom entities and pass them to a web service, as a string. The web service receives the string and byte arrays intact. The following code then attempts to deserialize the collection - back into custom entities for processing... XmlSerializer ser = new XmlSerializer(typeof(List<myCustomEntity>)); StringReader reader = new StringReader(xmlStringPassedToWS); List<myCustomEntity> entities = (List<myCustomEntity>)ser.Deserialize(reader); foreach (myCustomEntity e in entities) { // ...do some stuff... foreach (myChildCollection c in e.ChildCollection) { // .. do some more stuff.... } } I've checked the XML resulting from the initial serialization and it does contain the byte arrays - the child collection - as does the StringReader built above. After the deserialization process, the resulting collection of custom entities is fine, except that each object in the collection does not contain any items in its child collection (i.e. it doesn't get to "...do some more stuff..." above). Can someone please explain what I am doing wrong? Is it possible to serialize ArrayLists within a generic collection of custom entities?

    Read the article

  • How to add deploy.jar to classpath?

    - by dma_k
    I am facing the following problem: I need to add the ${java.home}/lib/deploy.jar JAR file to the classpath at runtime (dynamically from Java). The solution with Thread#setContextClassLoader(ClassLoader) (mentioned here) does not work because of this bug (if somebody can explain what the real problem is – you are welcome). The solution with -Xbootclasspath/a:"%JAVA_HOME%/jre/lib/deploy.jar" does not work well for me, because I want to have a "pure executable jar" as a deliverable: no wrapping scripts please (moreover, %JAVA_HOME% may not be defined in the user's environment on Windows, for example, plus I would need to write a script per platform). The solution with merging deploy.jar into my deliverable works only if I make the build on a Windows platform. Unfortunately, when the deliverable is produced on a build server running on Linux, I get a Linux-dependent JAR, which does not execute on Windows – it fails with the trace below. I have read the How the Java Launcher Finds Classes and Java programming dynamics: Java classes and class loading articles but I've got no extra ideas on how to correctly handle this situation. Any advice or solutions are very welcome. Trace: java.lang.NoClassDefFoundError: Could not initialize class com.sun.deploy.config.Config at com.sun.deploy.net.proxy.UserDefinedProxyConfig.getBrowserProxyInfo(UserDefinedProxyConfig.java:43) at com.sun.deploy.net.proxy.DynamicProxyManager.reset(DynamicProxyManager.java:235) at com.sun.deploy.net.proxy.DeployProxySelector.reset(DeployProxySelector.java:59) ... java.lang.NullPointerException at com.sun.deploy.net.proxy.DynamicProxyManager.getProxyList(DynamicProxyManager.java:63) at com.sun.deploy.net.proxy.DeployProxySelector.select(DeployProxySelector.java:166)
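
    For completeness, a hedged sketch of the common reflection workaround for adding a jar to the classpath at runtime. It assumes the system class loader is a URLClassLoader (true on the Sun/Oracle JREs of that era) and it is not a supported API, so it may or may not be acceptable here:

        import java.io.File;
        import java.lang.reflect.Method;
        import java.net.URL;
        import java.net.URLClassLoader;

        public class DeployJarLoader {
            public static void addDeployJar() throws Exception {
                File deployJar = new File(System.getProperty("java.home"), "lib/deploy.jar");
                URLClassLoader sysLoader = (URLClassLoader) ClassLoader.getSystemClassLoader();
                Method addURL = URLClassLoader.class.getDeclaredMethod("addURL", URL.class);
                addURL.setAccessible(true);   // addURL is protected
                addURL.invoke(sysLoader, deployJar.toURI().toURL());
            }
        }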

    Read the article

  • wordpress generating slow mysql queries - is it index problem?

    - by tash
    Hello Stack Overflow, I've got very slow MySQL queries coming up from my WordPress site. It's making everything slow and I think this is eating up CPU usage. I've pasted the Explain results for the two most frequently problematic queries below. This is a typical result - although very occasionally the queries do seem to be performed at a more normal speed. I have the usual WordPress indexes on the database tables. You will see that one of the queries is generated from WordPress core code, and not from anything specific - like the theme - for my site. I have a vague feeling that the database is not always using the indexes/is not using them properly... Is this right? Does anyone know how to fix it? Or is it a different problem entirely? Many thanks in advance for any help anyone can offer - it is hugely appreciated. Query: [wp-blog-header.php(14): wp()] SELECT SQL_CALC_FOUND_ROWS wp_posts.* FROM wp_posts WHERE 1=1 AND wp_posts.post_type = 'post' AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private') ORDER BY wp_posts.post_date DESC LIMIT 0, 6 id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE wp_posts ref type_status_date type_status_date 63 const 427 Using where; Using filesort Query time: 34.2829 (ms) 9) Query: [wp-content/themes/LMHR/index.php(40): query_posts()] SELECT SQL_CALC_FOUND_ROWS wp_posts.* FROM wp_posts WHERE 1=1 AND wp_posts.ID NOT IN ( SELECT tr.object_id FROM wp_term_relationships AS tr INNER JOIN wp_term_taxonomy AS tt ON tr.term_taxonomy_id = tt.term_taxonomy_id WHERE tt.taxonomy = 'category' AND tt.term_id IN ('217', '218', '223', '224') ) AND wp_posts.post_type = 'post' AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private') ORDER BY wp_posts.post_date DESC LIMIT 0, 6 id select_type table type possible_keys key key_len ref rows Extra 1 PRIMARY wp_posts ref type_status_date type_status_date 63 const 427 Using where; Using filesort 2 DEPENDENT SUBQUERY tr ref PRIMARY,term_taxonomy_id PRIMARY 8 func 1 Using index 2 DEPENDENT SUBQUERY tt eq_ref PRIMARY,term_id_taxonomy,taxonomy PRIMARY 8 antin1_lovemusic2010.tr.term_taxonomy_id 1 Using where Query time: 70.3900 (ms)

    Read the article

  • jQuery('body').text() gives different answers in different browsers

    - by Charles Anderson
    My HTML looks like this: <html> <head> <title>Test</title> <script type="text/javascript" src="jQuery.js"></script> <script type="text/javascript"> function init() { var text = jQuery('body').text(); alert('length = ' + text.length); } </script> </head> <body onload="init()">0123456789</body> </html> When I load this in Firefox, the length is reported as 10. However, in Chrome it's 11 because it thinks there's a linefeed after the '9'. In IE it's also 11, but the last character is an escape. Meanwhile, Opera thinks there are 12 characters, with the last two being CR LF. If I change the body element to include a span: <body onload="init()"><span>0123456789</span></body> and the jQuery call to: var text = jQuery('body span').text(); then all the browsers agree that the length is 10. Clearly it's the body element that's causing the issue, but can anyone explain exactly why this is happening? I'm particularly surprised because the excellent jQuery is normally browser-independent.
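
    A hedged explanation plus a small sketch: whitespace that appears after </body> in the markup gets reparented into the body element during parsing, and these browsers evidently differ in exactly which of those characters they keep, so normalizing the text before measuring gives the same answer everywhere:

        function init() {
            var text = jQuery.trim(jQuery('body').text());   // strip leading/trailing whitespace
            alert('length = ' + text.length);                // 10 in each browser
        }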

    Read the article
