Search Results

Search found 5469 results on 219 pages for 'compare and swap'.

Page 180 of 219

  • VS2010 not deploying all required files - then not overwriting updated files

    - by justSteve
    I'm in the habit of deploying to alternating folders (/inetpub/wwwroot/mySite & /inetpub/wwwroot/mySite2) so that if something unexpected happens with the deploy, I can quickly swap back to the previous version just by changing the path in IIS. So I was deploying an MVC2 web app to an empty folder, figuring that VS would send up all the files it needs. Not even close. Initially, it didn't even upload a couple of required nHibernate.dlls. Later, after manually copying the files referenced in the thrown exceptions, I just copied all the files from the previous compile and then re-published over the top, expecting VS to overwrite the changed files. It failed at that too. No errors were reported by VS... it just failed to overwrite a number of pre-existing (but changed/updated) files. It's hard to believe these kinds of errors (and the lack of feedback that errors were encountered) in a state-of-the-art tool like VS. Clearly, I'm doing something wrong. I'm using VisualSVN for source control and connect to my colocated server via a VPN-based mapped network drive (so I can use file-system publishing), both of which can complicate file properties. VS08 had more choices for which files it would send up - I found I needed to use 'All files in the source project folder' on an initial deployment, then 'Replace matching files' after that. If I chose 'Delete all existing...' I'd be back to square one and have to deploy with 'All files in the source project folder' again. But VS10 doesn't have the 'All files in the source project folder' option. I ended up manually copying the files - which seems not right in the extreme. Are these known issues others have to deal with? What's best practice for deploying a web app? Thanks

    Read the article

  • Write a SQL query to update data based on time

    - by Lu Lu
    Hello everyone. Because I am new to SQL Server and T-SQL, I will need your help. I have 2 tables: Realtime and EOD. To illustrate my question, here is example data for the 2 tables: ---Realtime table--- Symbol Date Value ABC 1/3/2009 03:05:01 327 // this day does not exist in EOD -> insert BBC 1/3/2009 03:05:01 458 // this day does not exist in EOD -> insert ABC 1/2/2009 03:05:01 326 // this day has newer data -> update BBC 1/2/2009 03:05:01 454 // this day has newer data -> update ABC 1/2/2009 02:05:01 323 BBC 1/2/2009 02:05:01 453 ABC 1/2/2009 01:05:01 313 BBC 1/2/2009 01:05:01 423 ---EOD table--- Symbol Date Value ABC 1/2/2009 02:05:01 323 BBC 1/2/2009 02:05:01 453 I need to create a stored procedure to update the values of the symbols. If the data in a day for a symbol is newer (comparing Realtime against EOD), it should update the value and date in EOD for that day if a row exists, otherwise insert one. After running, the stored procedure should leave EOD with this data: ---EOD table--- Symbol Date Value ABC 1/3/2009 03:05:01 327 BBC 1/3/2009 03:05:01 458 ABC 1/2/2009 03:05:01 326 BBC 1/2/2009 03:05:01 454 P.S.: I use SQL Server 2005, and I have a similar answered question here: http://stackoverflow.com/questions/2726369/help-to-the-way-to-write-a-query-for-the-requirement Please help me. Thanks.
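
    A rough upsert skeleton for SQL Server 2005 (which has no MERGE), assuming the table and column names above. It matches on exact datetimes; the per-day "replace with the newest value for that day" rule from the example output would still need to be layered on top:

    ```sql
    -- Sketch only: update EOD rows whose (Symbol, Date) already exist in Realtime,
    -- then insert the Realtime rows that EOD is missing.
    CREATE PROCEDURE dbo.usp_UpdateEOD
    AS
    BEGIN
        UPDATE e
        SET    e.Value = r.Value
        FROM   EOD e
               INNER JOIN Realtime r
                       ON r.Symbol = e.Symbol
                      AND r.[Date] = e.[Date];

        INSERT INTO EOD (Symbol, [Date], Value)
        SELECT r.Symbol, r.[Date], r.Value
        FROM   Realtime r
        WHERE  NOT EXISTS (SELECT 1
                           FROM   EOD e
                           WHERE  e.Symbol = r.Symbol
                              AND e.[Date] = r.[Date]);
    END
    ```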

    Read the article

  • Can iPad/iPhone Touch Points be Wrong Due to Calibration?

    - by Kristopher Johnson
    I have an iPad application that uses the whole screen (that is, UIStatusBarHidden is set true in the Info.plist file). The main window's frame is set to (0, 0, 768, 1024), as is the main view in that frame. The main view has multitouch enabled. The view has code to handle touches: - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { for (UITouch *touch in touches) { CGPoint location = [touch locationInView:nil]; NSLog(@"touchesMoved at location %@", NSStringFromCGPoint(location)); } } When I run the app in the simulator, it works pretty much as expected. As I move the mouse from one edge of the screen to the other, reported X values go from 0 to 767. Reported Y values go from 20 to 1023, but it is a known issue that the simulator doesn't report touches in the top 20 pixels of the screen, even when there is no status bar. Here's what's weird: When I run the app on an actual iPad, the X values go from 0 to 767 as expected, but reported Y values go from -6 to 1017. The fact that it seems to work properly on the simulator leads me to suspect that real devices' touchscreens are not perfectly calibrated, and mine is simply reporting values six pixels too low. Can anyone verify that this is the case? Otherwise, is there anything else that could account for the Y values being six pixels off from what I expect? (In a few days, I should have a second iPad, so I can test this with another device and compare the results.)

    Read the article

  • How can I find out if two arguments are instances of the same, but unknown class?

    - by Ingmar
    Let us say we have a method which accepts two arguments o1 and o2 of type Object and returns a boolean value. I want this method to return true only when the arguments are instances of the same class, e.g.: foo(new Integer(4), new Integer(5)); should return true, however: foo(new SomeClass(), new SubtypeSomeClass()); should return false, and also: foo(new Integer(3), "zoo"); should return false. I believe one way is to compare the fully qualified class names: public boolean foo(Object o1, Object o2){ Class<? extends Object> c1 = o1.getClass(); Class<? extends Object> c2 = o2.getClass(); if(c1.getName().equals(c2.getName())){ return true;} return false; } An alternative conditional statement would be: if (c1.isAssignableFrom(c2) && c2.isAssignableFrom(c1)){ return true; } The latter alternative is rather slow. Are there other alternatives to this problem?
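
    For what it's worth, a minimal sketch of the simpler route: within one class loader, Class objects are canonical, so comparing the two Class references directly is usually enough (comparing names only matters when different class loaders are involved):

    ```java
    // Sketch: getClass() returns the same Class object for instances of the
    // same class loaded by the same loader, so == (or equals) is sufficient.
    public final class SameClassCheck {
        public static boolean foo(Object o1, Object o2) {
            return o1.getClass() == o2.getClass();
        }

        public static void main(String[] args) {
            System.out.println(foo(Integer.valueOf(4), Integer.valueOf(5))); // true
            System.out.println(foo(Integer.valueOf(3), "zoo"));              // false
        }
    }
    ```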

    Read the article

  • Best way to databind a Winforms control to a nullable type?

    - by Steve Hiner
    I'm currently using WinForms databinding to wire up a data-editing form. I'm using the netTiers framework through CodeSmith to generate my data objects. For database fields that allow nulls it creates nullable types. I've found that with WinForms databinding the controls won't bind properly to nullable types. I've seen solutions online suggesting that people create new textbox classes that can handle nullable types, but that could be a pain, since I'd have to swap out the textboxes on the forms I've already created. Initially I thought it would be great to use an extension method to do it - basically creating an extension property for the textbox class and binding to that. From my limited extension-method experience and a bit of checking online, it looks like you can't create an extension property. As far as I can tell, binding has to go through a property, since it needs to be able to get and set the value, so an extension method wouldn't work. I'd love to find a clean way to retrofit these forms using something like extension methods, but if I have to create new textbox and combo box controls, that's what I'll do. My project is currently limited to .NET 2.0 due to the requirement to run on Windows 2000. Any suggestions?
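
    One possible retrofit that avoids new controls (a sketch only, worth verifying against netTiers-generated nullable properties): .NET 2.0 bindings can translate between a null source value and an empty display value when formatting is enabled. The entity, property, and control names below are made up for illustration:

    ```csharp
    // Sketch: a formatting-enabled Binding maps a null source value to NullValue
    // for display, and writes DataSourceNullValue back when the box is emptied.
    Binding amountBinding = new Binding("Text", myEntity, "UnitPrice", true);
    amountBinding.NullValue = string.Empty;     // show blank when UnitPrice is null
    amountBinding.DataSourceNullValue = null;   // store null when the box is blank
    txtUnitPrice.DataBindings.Add(amountBinding);
    ```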

    Read the article

  • Can't get 'syntax on' to work in my gvim

    - by user198553
    (I'm new to Linux and Vim, and I'm trying to learn Vim, but I'm having some issues with it that I can't seem to fix.) I'm on a Linux installation (Ubuntu 8.04) that I can't update, using Vim 7.1.138. My Vim installation is in /usr/share/vim/vim71/. My .vimrc file is in /home/user/.vimrc, as follows: fun! MySys() return "linux" endfun set runtimepath=~/.vim,$VIMRUTNTIME source ~/.vim/.vimrc And then, in my /home/user/.vim/.vimrc: " =============== GENERAL CONFIG ============== set nocompatible syntax on " =============== ENCODING AND FILE TYPES ===== set encoding=utf8 set ffs=unix,dos,mac " =============== INDENTING =================== set ai " Automatically set the indent of a new line (local to buffer) set si " smartindent (local to buffer) " =============== FONT ======================== " Set font according to system if MySys() == "mac" set gfn=Bitstream\ Vera\ Sans\ Mono:h13 set shell=/bin/bash elseif MySys() == "windows" set gfn=Bitstream\ Vera\ Sans\ Mono:h10 elseif MySys() == "linux" set gfn=Inconsolata\ 14 set shell=/bin/bash endif " =============== COLORS ====================== colorscheme molokai " ============== PLUGINS ====================== " -------------- NERDTree --------------------- :noremap ,n :NERDTreeToggle<CR> " =============== DIRECTORIES ================= set backupdir=~/.backup/vim set directory=~/.swap/vim The fact is that the command syntax on is not working, in either vim or gvim. And the strange thing is: if I try to enable syntax using the gvim toolbar, it works. Then, in normal mode in gvim, after activating it via the toolbar, running :syntax off works, but right after that, running :syntax on doesn't work!! What's going on? Am I missing something?
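
    One detail worth double-checking (an observation, not a confirmed fix): the runtimepath line above spells the variable $VIMRUTNTIME. If the stock runtime directory drops off 'runtimepath', :syntax on has no syntax scripts to load, which would match these symptoms. The intended lines would presumably be:

    ```vim
    " keep the personal directory plus Vim's own runtime files on the path
    set runtimepath=~/.vim,$VIMRUNTIME
    syntax on
    ```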

    Read the article

  • Groovy / Scala / Java under the hood

    - by Jack
    I used Java for 6-7 years, then some months ago I discovered Groovy and started to save a lot of typing. Then I wondered how certain things work under the hood (because Groovy performance is really poor) and understood that, to give you dynamic typing, every Groovy object carries a MetaClass that handles all the things the JVM can't handle by itself. Of course this introduces a layer in the middle, between what you write and what you execute, that slows everything down. Then some days ago I started reading about Scala. How do these two languages compare in their bytecode translations? How much do they add to the structure that would be produced from plain Java code? I mean, Scala is statically typed, so its wrappers around Java classes should be lighter, since many things are checked at compile time, but I'm not sure about the real differences in what goes on inside. (I'm not talking about the functional aspect of Scala compared to the other ones; that's a different thing.) Can someone enlighten me? From WizardOfOdds it seems that the only way to get less typing and the same performance would be to write an intermediate translator that translates something into Java code (letting javac compile it) without altering how things are executed - just adding syntactic sugar without caring about the other drawbacks of the language itself.

    Read the article

  • Grouping a list of entities which has a sublist in C#

    - by mikei
    I have a set of entities which basically has this structure: {Stats Name="<Product name> (en)" TotalResources="10" ..} {DayStats Date="2009-12-10" TotalResources="5"} {DayStats Date="2009-12-11" TotalResources="5"} {Stats} {Stats Name="<Product name> (us)" TotalResources="10" ..} {DayStats Date="2009-12-10" TotalResources="5"} {DayStats Date="2009-12-11" TotalResources="5"} {Stats} ... What I want to extract from this set is a new set of entities (or entity, in the example above) where the first level has been grouped by name (ignoring the country label), with all/some of the properties summed together, including the sublist of {DayStats} on a per-day basis. So the result set of the example would look something like this: {Stats Name="<Product name>" TotalResources="20" ..} {DayStats Date="2009-12-10" TotalResources="10"} {DayStats Date="2009-12-11" TotalResources="10"} {Stats} ... So my question is: is it possible to do this in a more elegant way in LINQ rather than the vanilla "loop-through-each-entity-and-compare" way? And I want the result set to contain the same type of entities as the original set ({Stats}, {DayStats}); your answer doesn't need to include code for that, I can probably work that out by myself - just letting you know in case you need to take it into account. (I'm sorry if this question has been discussed before; I tried searching to no avail. I guess my searching skills are fubar ;)) Merry Christmas =)
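
    A LINQ sketch of the grouping, assuming the entities look roughly like the excerpt; the class shapes and the trailing "(en)"/"(us)" suffix convention below are assumptions:

    ```csharp
    using System.Collections.Generic;
    using System.Linq;
    using System.Text.RegularExpressions;

    // Hypothetical shapes matching the excerpt; the real entity classes differ.
    class DayStats { public string Date; public int TotalResources; }
    class Stats { public string Name; public int TotalResources; public List<DayStats> DayStats; }

    static class StatsGrouping
    {
        // Group by product name with the trailing "(xx)" label stripped,
        // summing totals and merging the per-day sublists by date.
        public static List<Stats> Merge(IEnumerable<Stats> stats)
        {
            return stats
                .GroupBy(s => Regex.Replace(s.Name, @"\s*\([^)]*\)\s*$", ""))
                .Select(g => new Stats
                {
                    Name = g.Key,
                    TotalResources = g.Sum(s => s.TotalResources),
                    DayStats = g.SelectMany(s => s.DayStats)
                                .GroupBy(d => d.Date)
                                .Select(dg => new DayStats
                                {
                                    Date = dg.Key,
                                    TotalResources = dg.Sum(d => d.TotalResources)
                                })
                                .ToList()
                })
                .ToList();
        }
    }
    ```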

    Read the article

  • How can I validate/secure/authenticate a JavaScript-based POST request?

    - by Bungle
    A product I'm helping to develop will basically work like this: A Web publisher creates a new page on their site that includes a <script> from our server. When a visitor reaches that new page, that <script> gathers the text content of the page and sends it to our server via a POST request (cross-domain, using a <form> inside of an <iframe>). Our server processes the text content and returns a response (via JSONP) that includes an HTML fragment listing links to related content around the Web. This response is cached and served to subsequent visitors until we receive another POST request with text content from the same URL, at which point we regenerate a "fresh" response. These POSTs only happen when our cached TTL expires, at which point the server signifies that and prompts the <script> on the page to gather and POST the text content again. The problem is that this system seems inherently insecure. In theory, anyone could spoof the HTTP POST request (including the referer header, so we couldn't just check for that) that sends a page's content to our server. This could include any text content, which we would then use to generate the related content links for that page. The primary difficulty in making this secure is that our JavaScript is publicly visible. We can't use any kind of private key or other cryptic identifier or pattern because that won't be secret. Ideally, we need a method that somehow verifies that a POST request corresponding to a particular Web page is authentic. We can't just scrape the Web page and compare the content with what's been POSTed, since the purpose of having JavaScript submit the content is that it may be behind a login system. Any ideas? I hope I've explained the problem well enough. Thanks in advance for any suggestions.

    Read the article

  • How do I assert that two arbitrary type objects are equivalent, without requiring them to be equal?

    - by Tomas Lycken
    To accomplish this (but failing to do so), I'm reflecting over the properties of an expected and an actual object and making sure their values are equal. This works as expected as long as the properties are single objects, i.e. not lists, arrays, IEnumerable... If a property is a list of some sort, the test fails (on the Assert.AreEqual(...) inside the loop). public void WithCorrectModel<TModelType>(TModelType expected, string error = "") where TModelType : class { var actual = _result.ViewData.Model as TModelType; Assert.IsNotNull(actual, error); Assert.IsInstanceOfType(actual, typeof(TModelType), error); foreach (var prop in typeof(TModelType).GetProperties()) { Assert.AreEqual(prop.GetValue(expected, null), prop.GetValue(actual, null), error); } } If dealing with a list property, I would get the expected results if I instead used CollectionAssert.AreEquivalent(...), but that requires me to cast to ICollection, which in turn requires me to know the element type, which I don't (want to). It also requires me to know which properties are list types, which I don't know how to determine. So, how should I assert that two objects of an arbitrary type are equivalent? Note: I specifically don't want to require them to be equal, since one comes from my tested object and one is built in my test class to have something to compare with.
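
    One hedged way to extend the loop above without knowing the element type: treat any non-string IEnumerable property as a collection and compare it with CollectionAssert, falling back to Assert.AreEqual for everything else (requires System.Collections, System.Linq and the MSTest namespace):

    ```csharp
    // Sketch of the property loop only; Cast<object>() avoids needing the
    // element type, and AreEquivalent ignores ordering.
    foreach (var prop in typeof(TModelType).GetProperties())
    {
        object expectedValue = prop.GetValue(expected, null);
        object actualValue = prop.GetValue(actual, null);

        if (expectedValue is IEnumerable && !(expectedValue is string))
        {
            CollectionAssert.AreEquivalent(
                ((IEnumerable)expectedValue).Cast<object>().ToList(),
                ((IEnumerable)actualValue).Cast<object>().ToList(),
                error);
        }
        else
        {
            Assert.AreEqual(expectedValue, actualValue, error);
        }
    }
    ```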

    Read the article

  • Ignore case in Python strings

    - by Paul Oyster
    What is the easiest way to compare strings in Python, ignoring case? Of course one can do (str1.lower() <= str2.lower()), etc., but this created two additional temporary strings (with the obvious alloc/g-c overheads). I guess I'm looking for an equivalent to C's stricmp(). [Some more context requested, so I'll demonstrate with a trivial example:] Suppose you want to sort a looong list of strings. You simply do theList.sort(). This is O(n * log(n)) string comparisons and no memory management (since all strings and list elements are some sort of smart pointers). You are happy. Now, you want to do the same, but ignore the case (let's simplify and say all strings are ascii, so locale issues can be ignored). You can do theList.sort(key=lambda s: s.lower()), but then you cause two new allocations per comparison, plus burden the garbage-collector with the duplicated (lowered) strings. Each such memory-management noise is orders-of-magnitude slower than simple string comparison. Now, with an in-place stricmp()-like function, you do: theList.sort(cmp=stricmp) and it is as fast and as memory-friendly as theList.sort(). You are happy again. The problem is any Python-based case-insensitive comparison involves implicit string duplications, so I was expecting to find a C-based comparisons (maybe in module string). Could not find anything like that, hence the question here. (Hope this clarifies the question).
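
    Worth noting as a middle ground (a sketch, not a stricmp() replacement): list.sort(key=...) evaluates the key once per element rather than once per comparison, so the lowered copies are built n times instead of O(n log n) times:

    ```python
    # key= builds each lowered string once per element, not once per comparison,
    # which removes most of the allocation churn a per-compare lambda would cause.
    words = ["Banana", "apple", "Cherry", "apricot"]
    words.sort(key=str.lower)
    print(words)  # ['apple', 'apricot', 'Banana', 'Cherry']
    ```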

    Read the article

  • Log 2 N generic comparison tree

    - by Morano88
    Hey! I'm working on an algorithm for Redundant Binary Representation (RBR), where every two bits represent a digit. I designed a comparator that takes 4 bits and gives out 2 bits. I want to make the comparison in log2(n) steps, so if I have X and Y, I compare every 2 bits of X with every 2 bits of Y. This is smooth if the number of bits of X or Y equals n where n is a power of two, i.e. n = 2, 4, 8, 16, 32, ... etc. Like this: However, if my input is, let us say, 6 or 10 bits, then it isn't so smooth and I have to take into account some odd situations, like this: I have only shallow experience with algorithms... is there a generic way to do it, so that at the end I get only 2 bits no matter what the input is? I just need hints or pseudo-code. If my question is not appropriate here, feel free to flag it or tell me to remove it. I'm using VHDL, by the way!

    Read the article

  • TreeMap sort by value

    - by vito huang
    I'm new to Java. I want to write a comparator that will let me sort a TreeMap by value instead of by the default natural ordering. I tried something like this, but can't figure out what went wrong: import java.util.*; class treeMap { public static void main(String[] args) { System.out.println("the main"); byValue cmp = new byValue(); Map<String, Integer> map = new TreeMap<String, Integer>(cmp); map.put("de",10); map.put("ab", 20); map.put("a",5); for (Map.Entry<String,Integer> pair: map.entrySet()) { System.out.println(pair.getKey()+":"+pair.getValue()); } } } class byValue implements Comparator<Map.Entry<String,Integer>> { public int compare(Map.Entry<String,Integer> e1, Map.Entry<String,Integer> e2) { if (e1.getValue() < e2.getValue()){ return 1; } else if (e1.getValue() == e2.getValue()) { return 0; } else { return -1; } } } I guess what I am asking is: what controls what gets passed to the comparator function? Can I get a Map.Entry passed to the comparator?
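
    A hedged sketch of the usual workaround: a TreeMap's comparator is only ever handed keys (here, Strings), so its entries can't be sorted by value in place; copy the entries into a list and sort that instead:

    ```java
    import java.util.*;

    class SortByValue {
        public static void main(String[] args) {
            Map<String, Integer> map = new HashMap<String, Integer>();
            map.put("de", 10);
            map.put("ab", 20);
            map.put("a", 5);

            // Copy the entries out and sort the copy by value.
            List<Map.Entry<String, Integer>> entries =
                    new ArrayList<Map.Entry<String, Integer>>(map.entrySet());
            Collections.sort(entries, new Comparator<Map.Entry<String, Integer>>() {
                public int compare(Map.Entry<String, Integer> e1,
                                   Map.Entry<String, Integer> e2) {
                    // compareTo avoids the Integer reference-equality trap of ==
                    return e2.getValue().compareTo(e1.getValue()); // descending
                }
            });

            for (Map.Entry<String, Integer> pair : entries) {
                System.out.println(pair.getKey() + ":" + pair.getValue());
            }
        }
    }
    ```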

    Read the article

  • Comparing all values within a List against each other

    - by Kave
    I am a bit stuck here and can't think any further. public struct CandidateDetail { public int CellX { get; set; } public int CellY { get; set; } public int CellId { get; set; } } var dic = new Dictionary<int, List<CandidateDetail>>(); How can I compare each CandidateDetail item against the other CandidateDetail items within the same dictionary in the most efficient way? Example: there are three keys in the dictionary: 5, 6 and 1. Therefore we have three entries. Now each of these key entries has a List associated with it. In this case, let's say each of these three numbers has exactly two CandidateDetail items within the list associated with its key. This means, in other words, that we have two 5s, two 6s and two 1s, in different cells or in the same cells. I would like to know: if [5].1stItem.CellId == [6].1stItem.CellId => we got a hit. That means we have a 5 and a 6 within the same cell. if [5].2ndItem.CellId == [6].2ndItem.CellId => perfect. We found out that the other 5 and 6 are together within a different cell. if [1].1stItem.CellId == ... Now I also need to check the 1 against the other 5 and 6 to see whether the 1 exists within the same two cells or not. Could a LINQ expression help, perhaps? I am quite stuck here... I don't know... maybe I am taking the wrong approach. I am trying to solve the "hidden pair" of the game Sudoku. :) http://www.sudokusolver.eu/ExplainSolveMethodD.aspx Many thanks, Kave

    Read the article

  • Template Sort In C++

    - by wdow88
    Hey all, I'm trying to write a sort function but am having trouble figuring out how to initialize a value and how to make the function work as a generic template. The sort works by: find a pair (ii, jj) with a minimum value of A[ii]+A[jj] such that A[ii] > A[jj]; if such a pair exists, then swap A[ii] and A[jj], else break. The function I have written is as follows: template <typename T> void sort(T *A, int size) { T min =453; T temp=0; bool swapper = true; while(swapper) { swapper = false; int index1 = 0, index2 = 0; for (int ii = 0; ii < size-1; ii++){ for (int jj = ii + 1; jj < size; jj++){ if((min >= (A[ii]+A[jj])) && (A[ii] > A[jj])){ min = (A[ii]+A[jj]); index1 = ii; index2 = jj; swapper = true; } } } if (!swapper) return; else { temp = A[index1]; A[index1] = A[index2]; A[index2] = temp; sort(A,size); } } } This function will successfully sort an array of integers, but not an array of chars. I do not know how to properly initialize the min value for the start of the comparison. I tried initializing the value by simply adding the first two elements of the array together (min = A[0] + A[1]), but it looks to me like for this algorithm it will fail. I know this is sort of a strange type of sort, but it is practice for a test, so thanks for any input.
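
    On the initialization question only, one hedged alternative to a magic seed: track whether any candidate pair has been seen and let the first valid pair set the minimum. The sum is kept in a plain int here, on the assumption that T is char or int, which sidesteps both the 453 constant and char overflow:

    ```cpp
    #include <algorithm>

    // Sketch of the pair search only (assumes T is an integer-like type whose
    // pairwise sums fit in an int, e.g. char or int): no seed constant needed.
    template <typename T>
    bool swapMinPair(T *A, int size)
    {
        bool found = false;
        int minSum = 0, index1 = 0, index2 = 0;

        for (int ii = 0; ii < size - 1; ii++) {
            for (int jj = ii + 1; jj < size; jj++) {
                int sum = A[ii] + A[jj];             // integer promotion for char
                if (A[ii] > A[jj] && (!found || sum <= minSum)) {
                    minSum = sum;
                    index1 = ii;
                    index2 = jj;
                    found = true;
                }
            }
        }
        if (found)
            std::swap(A[index1], A[index2]);         // caller repeats until false
        return found;
    }
    ```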

    Read the article

  • Batch Image compression tool for optimizing thousands of images

    - by Daniel Magliola
    Hi all, I'm maintaining a site that has thousands of images that have not been compressed nearly enough. The homepage currently weighs in at 1.5 MB, and it could easily be less than half that. I'm looking for some kind of tool that will take a folder full of JPG pictures and recompress them to their "optimal" compression value. Obviously, "optimal lossy compression setting" is an oxymoron, but I'm thinking maybe a tool that will try different levels, compare the outputs to the input, and choose a "sweet spot" between size and destruction? Or even try whether PNG is a better option - many times it is, for "drawing"-type images. Does anyone know of such a tool? I'd have lots of fun coding one, but I bet someone already did and will save me two days. Alternatively, of course, anything that will take all pictures in a folder and recompress them with a fixed quality level (say, 40) will also work; it just won't make my inner nerd as happy, but it'll solve my problem just fine. (Ideally something that can run on Windows, ideally from the command line.) Thank you!
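
    For the fixed-quality fallback mentioned above, a hedged command-line example assuming ImageMagick is installed; the output folder and quality value are illustrative, and mogrify overwrites files in place unless -path points elsewhere:

    ```sh
    # Recompress every JPEG in the current folder at quality 40, writing to ./out
    mkdir out
    mogrify -path out -quality 40 *.jpg
    ```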

    Read the article

  • "string" != "string"

    - by Misiur
    Hi. I'm writing a simple template system of my own. I want to turn <title>{site('title')}</title> into a call to the function "site" with the parameter "title". Here's the relevant method: private function replaceFunc($subject) { foreach($this->func as $t) { $args = explode(", ", preg_replace('/\{'.$t.'\(\'([a-zA-Z,]+)\'\)\}/', '$1', $subject)); $subject = preg_replace('/\{'.$t.'\([a-zA-Z,\']+\)\}/', call_user_func_array($t, $args), $subject); } return $subject; } Here's site: function site($what) { global $db; $s = $db->askSingle("SELECT * FROM ".DB_PREFIX."config"); switch($what) { case 'title': return 'Title of page'; break; case 'version': return $s->version; break; case 'themeDir': return 'lolmao'; break; default: return false; } } I've tried to compare $what (which in this case should be "title") with "title". The MD5 hashes are different, strcmp gives -1, and both "==" and "===" return false. What is wrong? ($what's type is string. I can't change call_user_func_array to call_user_func, because later I'll be using multiple arguments.)
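
    A hedged debugging step rather than a fix: when two strings that print identically refuse to compare equal, dumping their bytes usually exposes the invisible difference (a stray quote, BOM, non-breaking space, or whitespace left over from the regex capture). The sample value below is hypothetical:

    ```php
    <?php
    // bin2hex/var_dump make hidden bytes visible; compare the two dumps.
    $captured = "title\xC2\xA0";          // hypothetical: 'title' + a non-breaking space
    var_dump($captured, 'title');         // string(7) vs string(5)
    echo bin2hex($captured), "\n";        // 7469746c65c2a0
    echo bin2hex('title'), "\n";          // 7469746c65
    ```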

    Read the article

  • Best practice PHP Form Action

    - by Rob
    Hi there. I've built a new script (from scratch, not a CMS) and I've done a lot of work on reducing memory usage and the time it takes for the page to be displayed (caching HTML, etc.). There's one thing that I'm not sure about, though. Take a simple example of an article with a comments section. If the comment form posts to another page that then redirects back to the article page, I won't have the problem of people clicking refresh and resending the information. However, if I do it that way, I have to load up my script twice, use twice as much memory, and it takes twice as long, whilst I'm still only displaying the page once. Here's an example from my load log. The first load of the article is from the cache; the second rebuilds the page after the comment is posted. Example 1: 0 queries using 650856 bytes of memory in 0.018667 - domain.com/article/1/my_article.html 9 queries using 1325723 bytes of memory in 0.075825 - domain.com/article/1/my_article/newcomment.html 0 queries using 650856 bytes of memory in 0.029449 - domain.com/article/1/my_article.html Example 2: 0 queries using 650856 bytes of memory in 0.023526 - domain.com/article/1/my_article.html 9 queries using 1659096 bytes of memory in 0.060032 - domain.com/article/1/my_article.html Obviously the times fluctuate, so you can't really compare those, but as you can see, with the first method I use more memory and it takes longer to load - BUT the first method avoids the refresh problem. Does anyone have any suggestions for the best approach, or for alternative ways to avoid the extra load (admittedly minimal, but I'd still like to avoid it) whilst also avoiding the refresh problem?

    Read the article

  • NULL-keys for key/value table

    - by user72185
    (Using Oracle) I have a table with key/value pairs like this: create table MESSAGE_INDEX ( KEY VARCHAR2(256) not null, VALUE VARCHAR2(4000) not null, MESSAGE_ID NUMBER not null ) I now want to find all the messages where key = 'someKey' and value is 'val1', 'val2' or 'val3' - OR value is null, in which case there will be no entry in the table at all. This is to save space; there would be a large number of keys with null values if I stored them all. I think this works: SELECT message_id FROM message_index idx WHERE ((key = 'someKey' AND value IN ('val1', 'val2', 'val3')) OR NOT EXISTS (SELECT 1 FROM message_index WHERE key = 'someKey' AND idx.message_id = message_id)) But it is extremely slow. It takes 8 seconds with 700K records in message_index, and there will be many more records and more search criteria when moving outside of my test environment. The primary key is (key, value, message_id): add constraint PK_KEY_VALUE primary key (KEY, VALUE, MESSAGE_ID) And I added another index on message_id, to speed up the search for missing keys: create index IDX_MESSAGE_ID on MESSAGE_INDEX (MESSAGE_ID) I will be doing several of these key/value lookups in every search, not just one as shown above. So far I am doing them nested, where the output ids of one level are the input to the next, e.g.: SELECT message_id from message_index WHERE (key/value compare) AND message_id IN ( SELECT ... and so on ) What can I do to speed this up?
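
    One hedged rewrite to experiment with (not a guaranteed speed-up; it only reuses the names above): replace the correlated NOT EXISTS with a single outer join over the distinct message ids, so the "no 'someKey' row at all" case shows up as a NULL on the joined side:

    ```sql
    -- Sketch: keep messages whose 'someKey' row is missing entirely
    -- (idx.message_id IS NULL after the outer join) or has a wanted value.
    SELECT DISTINCT m.message_id
    FROM   (SELECT DISTINCT message_id FROM message_index) m
           LEFT JOIN message_index idx
                  ON  idx.message_id = m.message_id
                  AND idx.key = 'someKey'
    WHERE  idx.message_id IS NULL
       OR  idx.value IN ('val1', 'val2', 'val3');
    ```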

    Read the article

  • Proper use of HttpRequestInterceptor and CredentialsProvider in doing preemptive authentication with HttpClient

    - by Preston
    I'm writing an application in Android that consumes some REST services I've created. These web services aren't issuing a standard Apache Basic challenge/response. Instead, in the server-side code I want to pull the username and password out of the HTTP(S) request and compare them against a database user to make sure they can run that service. I'm using HttpClient to do this, and I have the credentials stored on the client after the initial login (at least that's how I see this working). So here is where I'm stuck. Preemptive authentication with HttpClient requires you to set up an interceptor as a static member. This is the example Apache Components uses: HttpRequestInterceptor preemptiveAuth = new HttpRequestInterceptor() { @Override public void process( final HttpRequest request, final HttpContext context) throws HttpException, IOException { AuthState authState = (AuthState) context.getAttribute(ClientContext.TARGET_AUTH_STATE); CredentialsProvider credsProvider = (CredentialsProvider) context.getAttribute( ClientContext.CREDS_PROVIDER); HttpHost targetHost = (HttpHost) context.getAttribute(ExecutionContext.HTTP_TARGET_HOST); if (authState.getAuthScheme() == null) { AuthScope authScope = new AuthScope(targetHost.getHostName(), targetHost.getPort()); Credentials creds = credsProvider.getCredentials(authScope); if (creds != null) { authState.setAuthScheme(new BasicScheme()); authState.setCredentials(creds); } } } }; So the question would be this: what would the proper use of this be? Would I spin this up as part of the application when the application starts, pulling the username and password out of memory and then using them to create the CredentialsProvider which is then used by the HttpRequestInterceptor? Or is there a way to do this more dynamically?
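
    A hedged sketch of the wiring only, reusing the interceptor above: build the client once (for instance in a custom Application object), register the interceptor first in the chain, and park the stored credentials in the client's provider after login. The host, port, and variable names are placeholders:

    ```java
    // Sketch: one shared client, preemptive-auth interceptor registered up front,
    // credentials kept in the client's CredentialsProvider after login.
    DefaultHttpClient httpClient = new DefaultHttpClient();
    httpClient.addRequestInterceptor(preemptiveAuth, 0);
    httpClient.getCredentialsProvider().setCredentials(
            new AuthScope("api.example.com", 443),                    // placeholder host
            new UsernamePasswordCredentials(storedUser, storedPass)); // from login
    ```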

    Read the article

  • (x86) Assembler Optimization

    - by Pindatjuh
    I'm building a compiler/assembler/linker in Java for the x86-32 (IA32) processor, targeting Windows. High-level concepts of a "language" (in essence a Java API for creating executables) are translated into opcodes, which are then wrapped and output to a file. The translation process has several phases; one is the translation between languages: the highest-level code is translated into the medium-level code, which is then translated into the lowest-level code (probably more than 3 levels). My problem is the following: if I have higher-level code (X and Y) translated to lower-level code (x, y, U and V), then an example of such a translation is, in pseudo-code: x + U(f) // generated by X + V(f) + y // generated by Y (an easy example), where V is the opposite of U (compare with a stack push as U and a pop as V). This needs to be 'optimized' into: x + y (essentially removing the "useless" code). My idea was to use regular expressions. For the above case, it would be a regular expression looking like this: x:(U(x)+V(x)):null, meaning: for all x, find U(x) followed by V(x) and replace it with null. Imagine more complex regular expressions for more complex optimizations. This should work on all levels. What do you suggest? What would be a good approach to optimization in these situations?
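
    A toy illustration of the regex idea on a textual opcode stream (the mnemonics and the one-instruction-per-token encoding are made up): repeatedly delete an immediately cancelling push/pop pair, i.e. the U(x)+V(x) -> null rule, until nothing more matches:

    ```java
    import java.util.regex.Pattern;

    class Peephole {
        // "push r;pop r;" where both registers are the same token (backreference \1)
        private static final Pattern PUSH_POP_SAME =
                Pattern.compile("push (\\w+);pop \\1;");

        static String optimize(String code) {
            String previous;
            do {                          // repeat until a fixpoint is reached
                previous = code;
                code = PUSH_POP_SAME.matcher(code).replaceAll("");
            } while (!code.equals(previous));
            return code;
        }

        public static void main(String[] args) {
            System.out.println(optimize("inc ebx;push eax;pop eax;dec ecx;"));
            // prints: inc ebx;dec ecx;
        }
    }
    ```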

    Read the article

  • XStream and IBM J9 SDK incompatibilities on Linux

    - by Yoni
    I encountered an incompatibility between XStream and the IBM J9 JDK (the 32-bit version). Everything worked fine when I used the Sun JDK but fails on the IBM JDK (on Linux only; on Windows it's OK with both JDKs). When debugging, the error appears to be that XStream uses a java.util.TreeSet internally but the set's iterator returns elements in the wrong order (I know this sounds very strange, but this is the behavior that I saw). Googling for related bugs didn't give any meaningful results. I tried upgrading pretty much every component possible, but no luck. I tried the following configurations: IBM JDK 1.6 SR7 (bundled with WebSphere 7.0.0.9), XStream 1.2.2; IBM JDK 1.6 SR8, XStream 1.2.2; IBM JDK 1.6 SR8, XStream 1.3.1 (I tried those both with Tomcat and with WebSphere, so actually there are 6 configurations using the IBM JDK). The code in question is in class com.thoughtworks.xstream.core.DefaultConverterLookup, around line 44. It uses an iterator from class com.thoughtworks.xstream.core.util.PrioritizedList, which uses a custom comparator, but all the comparator does is compare integers (the priorities). Has anyone seen this before? Any idea what I can do or change?

    Read the article

  • handling large arrays with array_diff

    - by bigmac
    I have been trying to compare two arrays. Using array_intersect presents no problems. When using array_diff with arrays of ~5,000 values, it works. When I get to ~10,000 values, the script dies when it reaches array_diff. Turning on error_reporting did not produce anything. I tried creating my own array_diff function: function manual_array_diff($arraya, $arrayb) { foreach ($arraya as $keya => $valuea) { if (in_array($valuea, $arrayb)) { unset($arraya[$keya]); } } return $arraya; } source: http://stackoverflow.com/questions/2479963/how-does-array-diff-work I would expect it to be less efficient than the official array_diff, but it can handle arrays of ~10,000. Unfortunately, both array_diffs fail when I get to ~15,000. I tried the same code on a different machine and it runs fine, so it's not an issue with the code or PHP. There must be some limit set somewhere on that particular server. Any idea how I can get around that limit, alter it, or just find out what it is?
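
    A hedged diagnostic rather than a fix: a silent death at a size threshold on one server but not another usually points at the per-script memory cap (or a suppressed fatal error), both of which can be inspected at runtime:

    ```php
    <?php
    // Make fatals visible and inspect/raise the memory cap (the host may refuse).
    ini_set('display_errors', '1');
    error_reporting(E_ALL);

    echo ini_get('memory_limit'), "\n";      // e.g. "32M" on older shared hosting
    ini_set('memory_limit', '128M');         // may be disallowed by the host
    echo memory_get_peak_usage(true), "\n";  // bytes used so far by this script
    ```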

    Read the article

  • ASP.NET MVC Authorize by Group

    - by Jimmo
    I have what seems like a common issue with SaaS applications, but have not seen this question on here anywhere. I am using ASP.NET MVC with Forms Authentication. I have implemented a custom membership provider to handle logic, but have one issue (perhaps the issue is in my mental picture of the system). As with many SaaS apps, Customers create accounts and use the app in a way that looks like they are the only ones present (they only see their items, users, etc.) In reality, there are generic controllers and views presenting data depending on their account. When calling something like ValidateUser, I have access to their affiliation in the User object - what I don't have is the context of the request to which to compare it. As an example, One company called ABC goes to abc.mysite.com Another company called XYZ goes to xyz.mysite.com When an ABC user calls http://abc.mysite.com/product/edit/12 I have an [Authorize] attribute on the Edit method in the ProductController to make sure he is signed in and has sufficient permission to do so. If that same ABC user tried to access http://xyz.mysite.com/product/edit/12 I would not want to validate him in the context of that call. In the ValidateUser of the MembershipProvider, I have the information about the user, but not about the request. I can tell that the user is from ABC, but I cannot tell that the request is for XYZ at that point in the code. How should I resolve this?
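
    One hedged direction (a sketch only; the company lookup is a placeholder): a custom AuthorizeAttribute sees the full request, so the requested subdomain and the authenticated user's company can be compared in one place and applied as [TenantAuthorize] instead of [Authorize]:

    ```csharp
    using System;
    using System.Web;
    using System.Web.Mvc;

    public class TenantAuthorizeAttribute : AuthorizeAttribute
    {
        protected override bool AuthorizeCore(HttpContextBase httpContext)
        {
            // Standard Forms Authentication / roles check first.
            if (!base.AuthorizeCore(httpContext))
                return false;

            // "abc" from "abc.mysite.com" (assumed host layout).
            string subdomain = httpContext.Request.Url.Host.Split('.')[0];

            // Placeholder: which company does this user belong to?
            string userCompany = LookupCompany(httpContext.User.Identity.Name);

            return string.Equals(subdomain, userCompany,
                                 StringComparison.OrdinalIgnoreCase);
        }

        private static string LookupCompany(string userName)
        {
            // Placeholder for the membership/company query.
            return "abc";
        }
    }
    ```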

    Read the article

  • How to implement web cache: internal fragmentation VS external fragmentation

    - by Summer_More_More_Tea
    Hi there: I came up with this question while playing with the Firefox web cache: how does the browser cache responses in limited disk space (take my configuration as an example: 50 MB is the upper bound)? I think two approaches can be employed. One is to cache each response object whole, one by one, but this is inefficient and introduces external fragmentation, so the total cache space may not be fully used. The second is to treat the total space (50 MB) as one contiguous file, splitting it into fixed-length slots; incoming response objects are then also treated as blocks of data with the same length as the slots. We can fill slots until the file runs out, and then some replacement algorithm can be used to swap out old cached objects. The latter approach will of course bring in internal fragmentation, but in my opinion it is easier to implement and maintain than the first strategy. But when I look inside Firefox's Cache directory, I find that it (maybe) uses a different method: many variable-length files reside in that directory, all filled with non-displayable characters. I really want to know what mechanism a production browser, e.g. Firefox, employs to implement its web cache. Regards.

    Read the article
