Search Results

Search found 5469 results on 219 pages for 'compare and swap'.


  • C++ thread safety - exchange data between worker and controller

    - by peterchen
    I still feel a bit unsafe about the topic and hope you folks can help me - For passing data (configuration or results) between a worker thread polling something and a controlling thread interested in the most recent data, I've ended up using more or less the following pattern repeatedly: Mutex m; tData * stage; // temporary, accessed concurrently // send data, gives up ownership, receives old stage if any tData * Send(tData * newData) { ScopedLock lock(m); swap(newData, stage); return newData; } // receiving thread fetches latest data here tData * Fetch(tData * prev) { ScopedLock lock(m); if (stage != 0) { // ... release prev prev = stage; stage = 0; } return prev; // now current } Note: This is not supposed to be a full producer-consumer queue, only the most recent data is relevant. Also, I've skimped on resource management somewhat here. When necessary I'm using two such stages: one to send config changes to the worker, and one for sending back results. Now, my questions, assuming that ScopedLock implements a full memory barrier: do stage and/or workerData need to be volatile? is volatile necessary for tData members? can I use smart pointers instead of the raw pointers - say boost::shared_ptr? Anything else that can go wrong? I am basically trying to avoid "volatile infection" spreading into tData, and minimize lock contention (a lock-free implementation seems possible, too). However, I'm not sure if this is the easiest solution. ScopedLock acts as a full memory barrier. Since all this is more or less platform dependent, let's say Visual C++ x86 or x64, though differences/notes for other platforms are welcome, too. (a preliminary "thanks but" for recommending libraries such as Intel TBB - I am trying to understand the platform issues here)
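
    A minimal sketch of the same staging idea in C++11 (an illustration, not the poster's code): std::shared_ptr handles the ownership hand-over, the mutex alone provides the required ordering, so no volatile is needed anywhere; Data is a hypothetical stand-in for tData:

        #include <memory>
        #include <mutex>
        #include <utility>

        // Hypothetical payload type standing in for tData.
        struct Data { int value = 0; };

        class Stage {
        public:
            // Worker publishes the newest data; any older staged data is dropped.
            void send(std::shared_ptr<Data> newData) {
                std::lock_guard<std::mutex> lock(m_);
                stage_ = std::move(newData);
            }
            // Controller takes the latest data, or nullptr if nothing new arrived.
            std::shared_ptr<Data> fetch() {
                std::lock_guard<std::mutex> lock(m_);
                return std::move(stage_);   // leaves stage_ empty
            }
        private:
            std::mutex m_;
            std::shared_ptr<Data> stage_;
        };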

    Read the article

  • Benchmarking a UDP server

    - by Nicolas
    I am refactoring a UDP listener from Java to C. It needs to handle between 1000 and 10000 UDP messages per second, with an average data length of around 60 bytes. There is no reply necessary. Data cannot be lost (Don't ask why UDP was decided). I fork off a process to deal with the incoming data so that I can recvfrom as quickly as possible - without filling up my kernel buffers. The child then handles the data received. In short, my algo is: Listen for data. When data is received, check for errors. Fork off a child. If I'm a child, do what I need to with the data and exit. If I'm a parent, reap any zombie children with waitpid(-1, NULL, WNOHANG). Repeat. Firstly, any comments about the above? I'm creating the socket with socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP), binding with AF_INET and INADDR_ANY and recvfrom with no flags. Secondly, can anyone suggest something that I can use to test that this application (or at least the listener) can handle more messages than what I am expecting? Or would I need to hack something together to do this? I'd guess the latter would be better, so that I can compare data that is generated versus data that is received. But, comments would be appreciated.
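
    A rough load-generator sketch (my own, with assumed names throughout, not part of the question): it sends numbered datagrams as fast as possible so the data generated can later be compared against the data the listener logged:

        /* Build with: cc -O2 udpgen.c -o udpgen
           Usage: ./udpgen host port count len */
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(int argc, char **argv) {
            if (argc != 5) {
                fprintf(stderr, "usage: %s host port count len\n", argv[0]);
                return 1;
            }
            int count = atoi(argv[3]);
            int len = atoi(argv[4]);
            char *buf = calloc(1, len);

            struct sockaddr_in dst;
            memset(&dst, 0, sizeof dst);
            dst.sin_family = AF_INET;
            dst.sin_port = htons((unsigned short)atoi(argv[2]));
            inet_pton(AF_INET, argv[1], &dst.sin_addr);

            int s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
            for (int i = 0; i < count; i++) {
                snprintf(buf, len, "%d", i);      /* sequence number as payload */
                sendto(s, buf, len, 0, (struct sockaddr *)&dst, sizeof dst);
            }
            close(s);
            free(buf);
            return 0;
        }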

    Read the article

  • Rails 3 memory issue

    - by Erik
    Hello! I'm developing a new site based on Ruby on Rails 3 beta. I knew this might be a bad idea considering it's just beta, but I still thought it might work. Now though I'm having HUGE problems with Rails consuming huge amounts of memory. For my application today it consumes about 10 MB per request and it doesn't seem to release it either. So I thought this might be because of bloat in my application and thus I created a test app just to compare. For my test app I just generated a model with a scaffold and then created about 20 records on this model. I then went to the index page and hit refresh and I could immediately see memory taking off! Less than my app but still about 1-3 MB per request. I'm working in OSX Leopard, with Ruby 1.8.7, Rails 3.0.0.beta and a SQLite db for development. Does anyone recognize my problem? I would really appreciate some help here. :/ Thanks!

    Read the article

  • Removing padding from structure in kernel module

    - by dexkid
    I am compiling a kernel module, containing a structure of size 34, using the standard command. make -C /lib/modules/$(KVERSION)/build M=$(PWD) modules The sizeof(some_structure) is coming as 36 instead of 34 i.e. the compiler is padding the structure. How do I remove this padding? Running make V=1 shows the gcc compiler options passed as make -I../inc -C /lib/modules/2.6.29.4-167.fc11.i686.PAE/build M=/home/vishal/20100426_eth_vishal/organised_eth/src modules make[1]: Entering directory `/usr/src/kernels/2.6.29.4-167.fc11.i686.PAE' test -e include/linux/autoconf.h -a -e include/config/auto.conf || ( \ echo; \ echo " ERROR: Kernel configuration is invalid."; \ echo " include/linux/autoconf.h or include/config/auto.conf are missing."; \ echo " Run 'make oldconfig && make prepare' on kernel src to fix it."; \ echo; \ /bin/false) mkdir -p /home/vishal/20100426_eth_vishal/organised_eth/src/.tmp_versions ; rm -f /home/vishal/20100426_eth_vishal/organised_eth/src/.tmp_versions/* make -f scripts/Makefile.build obj=/home/vishal/20100426_eth_vishal/organised_eth/src gcc -Wp,-MD,/home/vishal/20100426_eth_vishal/organised_eth/src/.eth_main.o.d -nostdinc -isystem /usr/lib/gcc/i586-redhat-linux/4.4.0/include -Iinclude -I/usr/src/kernels/2.6.29.4-167.fc11.i686.PAE/arch/x86/include -include include/linux/autoconf.h -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Os -m32 -msoft-float -mregparm=3 -freg-struct-return -mpreferred-stack-boundary=2 -march=i686 -mtune=generic -Wa,-mtune=generic32 -ffreestanding -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -Iarch/x86/include/asm/mach-generic -Iarch/x86/include/asm/mach-default -Wframe-larger-than=1024 -fno-stack-protector -fno-omit-frame-pointer -fno-optimize-sibling-calls -g -pg -Wdeclaration-after-statement -Wno-pointer-sign -fwrapv -fno-dwarf2-cfi-asm -DTX_DESCRIPTOR_IN_SYSTEM_MEMORY -DRX_DESCRIPTOR_IN_SYSTEM_MEMORY -DTX_BUFFER_IN_SYSTEM_MEMORY -DRX_BUFFER_IN_SYSTEM_MEMORY -DALTERNATE_DESCRIPTORS -DEXT_8_BYTE_DESCRIPTOR -O0 -Wall -DT_ETH_1588_051 -DALTERNATE_DESCRIPTORS -DEXT_8_BYTE_DESCRIPTOR -DNETHERNET_INTERRUPTS -DETH_IEEE1588_TESTS -DSNAPTYPSEL_TMSTRENA_TEVENTENA_TESTS -DT_ETH_1588_140_147 -DLOW_DEBUG_PRINTS -DMEDIUM_DEBUG_PRINTS -DHIGH_DEBUG_PRINTS -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(eth_main)" -D"KBUILD_MODNAME=KBUILD_STR(conxt_eth)" -c -o /home/vishal/20100426_eth_vishal/organised_eth/src/eth_main.o /home/vishal/20100426_eth_vishal/organised_eth/src/eth_main.c
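
    A self-contained illustration of the effect (the poster's structure is not shown, so this 34-byte layout is an assumption): gcc rounds the structure up to a multiple of its strictest member alignment unless it is declared packed, e.g. with __attribute__((packed)) or #pragma pack(1):

        #include <stdint.h>
        #include <stdio.h>

        /* Default layout: 32 + 2 = 34 bytes of data, rounded up to 36 because
           the struct inherits 4-byte alignment from uint32_t. */
        struct padded {
            uint32_t a[8];
            uint16_t b;
        };

        /* Packed layout: exactly 34 bytes, but members may be misaligned. */
        struct __attribute__((packed)) packed {
            uint32_t a[8];
            uint16_t b;
        };

        int main(void) {
            printf("%zu %zu\n", sizeof(struct padded), sizeof(struct packed)); /* 36 34 */
            return 0;
        }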

    Read the article

  • What makes these two R data frames not identical?

    - by Matt Parker
    UPDATE: I remembered dput() about the time Sharpie mentioned it. It's probably the row names. Back in a moment with an answer. I have two small data frames, this_tx and last_tx. They are, in every way that I can tell, completely identical. this_tx == last_tx results in a frame of identical dimensions, all TRUE. this_tx %in% last_tx, two TRUEs. Inspected visually, clearly identical. But when I call identical(this_tx, last_tx) I get a FALSE. Hilariously, even identical(str(this_tx), str(last_tx)) will return a TRUE. If I set this_tx <- last_tx, I'll get a TRUE. What is going on? I don't have the deepest understanding of R's internal mechanics, but I can't find a single difference between the two data frames. If it's relevant, the two variables in the frames are both factors - same levels, same numeric coding for the levels, both just subsets of the same original data frame. Converting them to character vectors doesn't help. Background (because I wouldn't mind help on this, either): I have records of drug treatments given to patients. Each treatment record essentially specifies a person and a date. A second table has a record for each drug and dose given during a particular treatment (usually, a few drugs are given each treatment). I'm trying to identify contiguous periods during which the person was taking the same combinations of drugs at the same doses. The best plan I've come up with is to check the treatments chronologically. If the combination of drugs and doses for treatment[i] is identical to the combination at treatment[i-1], then treatment[i] is a part of the same phase as treatment[i-1]. Of course, if I can't compare drug/dose combinations, that's right out.
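
    If the row names really are the culprit, as the UPDATE suspects, a quick way to confirm it (a sketch using the question's own variable names):

        # all.equal() reports *what* differs instead of a bare FALSE:
        all.equal(this_tx, last_tx)
        # If only the row names differ, resetting them makes identical() succeed:
        rownames(this_tx) <- NULL
        rownames(last_tx) <- NULL
        identical(this_tx, last_tx)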

    Read the article

  • Xunit: Perform all 'Assert'ions in one test method?

    - by Jörg Battermann
    Is it possible to tell xUnit.net to perform all e.g. Assert.True() in one test method? Basically in some of our use/testcases all assertions belong logically to one and the same 'scope' of tests and I have e.g. something like this: [Fact(DisplayName = "Tr-MissImpl")] public void MissingImplementationTest() { // parse export.xml file var exportXml = Libraries.Utilities.XML.GenericClassDeserializer.DeserializeXmlFile<Libraries.MedTrace.ExportXml>( ExportXmlFile); // compare parsed results with expected ones Assert.True(exportXml.ContainsRequirementKeyWithError("PERS_154163", "E0032A")); Assert.True(exportXml.ContainsRequirementKeyWithError("PERS_155763", "E0032A")); Assert.True(exportXml.ContainsRequirementKeyWithError("PERS_155931", "E0032A")); Assert.True(exportXml.ContainsRequirementKeyWithError("PERS_157145", "E0032A")); Assert.True(exportXml.ContainsRequirementKeyWithError("s_sw_ers_req_A", "E0032A")); Assert.True(exportXml.ContainsRequirementKeyWithError("s_sw_ers_req_C", "E0032A")); Assert.True(exportXml.ContainsRequirementKeyWithError("s_sw_ers_req_D", "E0032A")); } Now if e.g. the first Assert.True(...) fails, the other ones are not executed/checked. I'd rather not break these seven Assertions up into separate methods, since these really do belong together logically (the TC only passes if all seven assertions pass together).
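
    One hedged workaround, not the only one: keep the seven checks in one test method but feed the keys in as theory data, so xUnit evaluates and reports every key even when one of them fails (identifiers below are copied from the question's code):

        [Theory(DisplayName = "Tr-MissImpl")]
        [InlineData("PERS_154163")]
        [InlineData("PERS_155763")]
        [InlineData("PERS_155931")]
        [InlineData("PERS_157145")]
        [InlineData("s_sw_ers_req_A")]
        [InlineData("s_sw_ers_req_C")]
        [InlineData("s_sw_ers_req_D")]
        public void MissingImplementationTest(string requirementKey)
        {
            // parse export.xml file once per key (or cache it in a fixture)
            var exportXml = Libraries.Utilities.XML.GenericClassDeserializer
                .DeserializeXmlFile<Libraries.MedTrace.ExportXml>(ExportXmlFile);
            Assert.True(exportXml.ContainsRequirementKeyWithError(requirementKey, "E0032A"));
        }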

    Read the article

  • How to detect hidden field tampering?

    - by Myron
    On a form of my web app, I've got a hidden field that I need to protect from tampering for security reasons. I'm trying to come up with a solution whereby I can detect if the value of the hidden field has been changed, and react appropriately (i.e. with a generic "Something went wrong, please try again" error message). The solution should be secure enough that brute force attacks are infeasible. I've got a basic solution that I think will work, but I'm not a security expert and I may be totally missing something here. My idea is to render two hidden inputs: one named "important_value", containing the value I need to protect, and one named "important_value_hash" containing the SHA hash of the important value concatenated with a constant long random string (i.e. the same string will be used every time). When the form is submitted, the server will re-compute the SHA hash, and compare against the submitted value of important_value_hash. If they are not the same, the important_value has been tampered with. I could also concatenate additional values with the SHA's input string (maybe the user's IP address?), but I don't know if that really gains me anything. Will this be secure? Anyone have any insight into how it might be broken, and what could/should be done to improve it? Thanks!
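
    A sketch of the same idea using an HMAC instead of a bare hash of value plus secret (the function names below are invented, and the key shown is a placeholder): HMAC avoids length-extension attacks on the concatenation scheme, and a constant-time comparison avoids timing leaks:

        import hashlib
        import hmac

        SECRET_KEY = b"a-long-random-server-side-secret"   # never sent to the client

        def sign(value: str) -> str:
            """HMAC-SHA256 of the hidden value under the server-side key."""
            return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

        def verify(value: str, submitted_hash: str) -> bool:
            """Constant-time check of the hash the browser sent back."""
            return hmac.compare_digest(sign(value), submitted_hash)

        # Render:    important_value = value, important_value_hash = sign(value)
        # On submit: reject the request unless verify(value, submitted_hash) is True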

    Read the article

  • Where are tables in Mnesia located?

    - by Sanoj
    I am trying to compare Mnesia with more traditional databases. As I understand it, tables in Mnesia can be located in: ram_copies - tables are stored in RAM only, so no durability as in ACID. disc_copies - tables are located on disc and a copy is located in RAM, so the table cannot be bigger than the available memory? disc_only_copies - tables are located on disc only, so no caching in memory and worse performance? And the size of the table is limited to the size of dets or the table has to be fragmented. So if I want the performance of doing reads from RAM and the durability of writes to disc, then the size of the tables is very limited compared to a traditional RDBMS like MySQL or PostgreSQL. I know that Mnesia isn't meant to replace traditional RDBMSs, but can it be used as a big RDBMS or do I have to look for another database? The server I will use is a VPS with a limited amount of memory, around 512 MB, but I want good database performance. Are disc_copies and the other types of tables in Mnesia as limited as I have understood?

    Read the article

  • Persisting complex data between postbacks in ASP.NET MVC

    - by Robert Wagner
    I'm developing an ASP.NET MVC 2 application that connects to some services to do data retrieval and update. The services require that I provide the original entity along with the updated entity when updating data. This is so it can do change tracking and optimistic concurrency. The services cannot be changed. My problem is that I need to somehow store the original entity between postbacks. In WebForms, I would have used ViewState, but from what I have read, that is out for MVC. The original values do not have to be tamper proof as the services treat them as untrusted. The entities would be (max) 1k and it is an intranet app. The options I have come up with are: Session - Ruled out - Store the entity in the Session, but I don't like this idea as there are no plans to share session between URL - Ruled out - Data is too big HiddenField - Store the serialized entity in a hidden field, perhaps with encryption/encoding HiddenVersion - The entities have a (SQL) version field on them, which I could put into a hidden field. Then on a save I get the "original" entity from the services and compare the versions, doing my own optimistic concurrency. Cookies - Like 3 or 4, but using a cookie instead of a hidden field I'm leaning towards option 4, although 3 would be simpler. Are these valid options or am I going down the wrong track? Is there a better way of doing this?

    Read the article

  • Simulating pass by reference for an array in Java

    - by Leif Andersen
    I was wondering, in Java, is it possible to in any way simulate pass by reference for an array? Yes, I know the language doesn't support it, but is there any way I can do it? Say, for example, I want to create a method that reverses the order of all the elements in an array. (I know that this code snippet isn't the best example, as there are better algorithms to do this, but this is a good example of the type of thing I want to do for more complex problems). Currently, I need to make a class like this: public static void reverse(Object[] arr) { Object[] tmpArr = new Object[arr.length]; count = arr.length - 1; for(Object i : arr) tmpArr[count--] = i; // I would like to do arr = tmpArr, but that will only make the shallow // reference tmpArr, I would like to actually change the pointer they passed in // Not just the values in the array, so I have to do this: count = arr.length - 1; for(Object i : tmpArr) arr[count--] = i; return; } Yes, I know that I could just swap the values until I get to the middle, and it would be much more efficient, but for other, more complex purposes, is there any way that I can manipulate the actual pointer? Again, thank you.
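
    One way to get the asked-for effect (a sketch, not the only approach): pass a holder object and repoint its contents instead of trying to reseat the caller's variable; a single-element array or a tiny wrapper class works the same way as AtomicReference does here:

        import java.util.concurrent.atomic.AtomicReference;

        class ReverseDemo {
            // The holder is passed by value, but caller and callee see the same
            // holder, so setting a new array inside it is visible to the caller.
            static void reverse(AtomicReference<Object[]> ref) {
                Object[] arr = ref.get();
                Object[] tmp = new Object[arr.length];
                for (int i = 0; i < arr.length; i++)
                    tmp[arr.length - 1 - i] = arr[i];
                ref.set(tmp);   // the caller now sees the reversed array
            }

            public static void main(String[] args) {
                AtomicReference<Object[]> ref =
                    new AtomicReference<>(new Object[] { 1, 2, 3 });
                reverse(ref);
                System.out.println(java.util.Arrays.toString(ref.get())); // [3, 2, 1]
            }
        }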

    Read the article

  • Read a buffer of unknown size (Console input)

    - by Sanarothe
    Hi. I'm a little behind in my X86 Asm class, and the book is making me want to shoot myself in the face. The examples in the book are insufficient and, honestly, very frustrating because of their massive dependencies upon the author's link library, which I hate. I wanted to learn ASM, not how to call his freaking library, which calls more of his library. Anyway, I'm stuck on a lab that requires console input and output. So far, I've got this for my input: input PROC INVOKE ReadConsole, inputHandle, ADDR buffer, Buf - 2, ADDR bytesRead, 0 mov eax,OFFSET buffer Ret input EndP I need to use the input and output procedures multiple times, so I'm trying to make it abstract. I'm just not sure how to use the data that is set to eax here. My initial idea was to take that string array and manually crawl through it by adding 8 to the offset for each possible digit (Input is integer, and there's a little bit of processing) but this doesn't work out because I don't know how big the input actually is. So, how would you swap the string array into an integer that could be used? Full code: (Haven't done the integer logic or the instruction string output because I'm stuck here.) include c:/irvine/irvine32.inc .data inputHandle HANDLE ? outputHandle HANDLE ? buffer BYTE BufSize DUP(?),0,0 bytesRead DWORD ? str1 BYTE "Enter an integer:",0Dh, 0Ah str2 BYTE "Enter another integer:",0Dh, 0Ah str3 BYTE "The higher of the two integers is: " int1 WORD ? int2 WORD ? int3 WORD ? Buf = 80 .code main PROC call handle push str1 call output call input push str2 call output call input push str3 call output call input main EndP larger PROC Ret larger EndP output PROC INVOKE WriteConsole Ret output EndP handle PROC USES eax INVOKE GetStdHandle, STD_INPUT_HANDLE mov inputHandle,eax INVOKE GetStdHandle, STD_INPUT_HANDLE mov outputHandle,eax Ret handle EndP input PROC INVOKE ReadConsole, inputHandle, ADDR buffer, Buf - 2, ADDR bytesRead, 0 mov eax,OFFSET buffer Ret input EndP END main
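
    A rough MASM-style sketch of the digit-accumulation step the question is stuck on (labels are invented, unsigned decimal input is assumed, and EAX/ECX/ESI are clobbered): walk the buffer that ReadConsole filled and build the value in EAX:

        ; Convert the ASCII digits in `buffer` to an integer in EAX, stopping at
        ; the CR (0Dh) that ReadConsole leaves at the end of the input.
        ParseInt PROC
            mov esi, OFFSET buffer
            xor eax, eax                ; accumulator = 0
        ParseLoop:
            movzx ecx, BYTE PTR [esi]
            cmp ecx, '0'
            jb ParseDone                ; CR, LF or terminator: stop
            cmp ecx, '9'
            ja ParseDone
            imul eax, eax, 10           ; accumulator = accumulator * 10 + digit
            sub ecx, '0'
            add eax, ecx
            inc esi
            jmp ParseLoop
        ParseDone:
            ret
        ParseInt EndP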

    Read the article

  • Database warehouse design: fact tables and dimension tables

    - by morpheous
    I am building a poor man's data warehouse using an RDBMS. I have identified the key 'attributes' to be recorded as: sex (true/false) demographic classification (A, B, C etc) place of birth date of birth weight (recorded daily): The fact that is being recorded My requirements are to be able to run 'OLAP' queries that allow me to: 'slice and dice' 'drill up/down' the data and generally, be able to view the data from different perspectives After reading up on this topic area, the general consensus seems to be that this is best implemented using dimension tables rather than normalized tables. Assuming that this assertion is true (i.e. the solution is best implemented using fact and dimension tables), I would like some help with the design of these tables. 'Natural' (or obvious) dimensions are: Date dimension Geographical location Which have hierarchical attributes. However, I am struggling with how to model the following fields: sex (true/false) demographic classification (A, B, C etc) The reason I am struggling with these fields is that: They have no obvious hierarchical attributes which will aid aggregation (AFAIA) - which suggests they should be in a fact table They are mostly static or very rarely change - which suggests they should be in a dimension table. Maybe the heuristic I am using above is too crude? I will give some examples of the type of analysis I would like to carry out on the data warehouse - hopefully that will clarify things further. I would like to aggregate and analyze the data by sex and demographic classification - e.g. answer questions like: How do male and female weights compare across different demographic classifications? Which demographic classification (male AND female) shows the greatest increase in weight this quarter? etc. Can anyone clarify whether sex and demographic classification are part of the fact table, or whether they are (as I suspect) dimension tables? Also, assuming they are dimension tables, could someone elaborate on the table structures (i.e. the fields)? The 'obvious' schema: CREATE TABLE sex_type (is_male int); CREATE TABLE demographic_category (id int, name varchar(4)); may not be the correct one.
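
    One common star-schema shape, sketched as SQL DDL (table and column names here are invented for illustration, not prescribed): sex and demographic classification become small dimension tables, and the daily weight fact references them, together with a date dimension, by surrogate key:

        CREATE TABLE dim_sex (
            sex_key   INT PRIMARY KEY,
            sex_name  VARCHAR(10) NOT NULL        -- 'Male' / 'Female'
        );

        CREATE TABLE dim_demographic (
            demographic_key  INT PRIMARY KEY,
            classification   VARCHAR(4) NOT NULL  -- 'A', 'B', 'C', ...
        );

        CREATE TABLE dim_date (
            date_key   INT PRIMARY KEY,           -- e.g. 20100426
            full_date  DATE NOT NULL,
            year       INT,
            quarter    INT,
            month      INT
        );

        CREATE TABLE fact_weight (
            date_key         INT REFERENCES dim_date(date_key),
            sex_key          INT REFERENCES dim_sex(sex_key),
            demographic_key  INT REFERENCES dim_demographic(demographic_key),
            weight_kg        DECIMAL(6,2) NOT NULL
        );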

    Read the article

  • RTF template coding, XSLT coding

    - by sujith
    I have the below requirement <data> <dataset1> <number>1</number> <name>red</name> <number>2</number> <name>Yellow</name> <number>3</number> <name>black</name> <number>4</number> <name>Violet</name> </dataset1> <dataset2> <index>1</index> <index>2</index> <index>3</index> <index>4</index> </dataset2> </data> I need to loop through dataset2, take the index value, and compare it with the value of the number tag in dataset1. If a match occurs then display the value of the corresponding name tag. I need to get the output in RTF format. Please give BI Publisher tags or relevant XSLT code to do the same. Thanks in advance.
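
    A sketch in plain XSLT 1.0 (the select expressions carry over into BI Publisher's <?for-each:...?> / <?...?> form fields): for each index in dataset2 it prints the name whose immediately preceding number sibling in dataset1 matches:

        <?xml version="1.0"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="text"/>
          <xsl:template match="/">
            <xsl:for-each select="/data/dataset2/index">
              <xsl:variable name="idx" select="."/>
              <!-- the name whose nearest preceding number equals the index -->
              <xsl:value-of select="/data/dataset1/name[preceding-sibling::number[1] = $idx]"/>
              <xsl:text>&#10;</xsl:text>
            </xsl:for-each>
          </xsl:template>
        </xsl:stylesheet>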

    Read the article

  • Database migration pattern for Java?

    - by Eno
    I'm working on some database migration code in Java. I'm also using a factory pattern so I can use different kinds of databases. And each kind of database I'm using implements a common interface. What I would like to do is have a migration check that is internal to the class and runs some database schema update code automatically. The actual update is pretty straightforward (I check the schema version in a table and compare against a constant in my app to decide whether to migrate or not and between which versions of schema). To make this automatic I was thinking the test should live inside (or be called from) the constructor. OK, fair enough, that's simple enough. My problem is that I don't want the test to run every single time I instantiate a database object (it runs a query so having it run on every construction is not efficient). So maybe this should be a class static method? I guess my question is, what is a good design pattern for this type of problem? There ought to be a clean way to ensure the migration test runs only once OR is super-efficient.
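
    A sketch of the run-once idea (class and method names are invented): guard the version check with a static flag so constructing many database objects does not repeat the query; compareAndSet keeps it safe even if several instances are constructed concurrently:

        import java.util.concurrent.atomic.AtomicBoolean;

        abstract class MigratingDatabase {
            private static final AtomicBoolean migrationChecked = new AtomicBoolean(false);

            protected MigratingDatabase() {
                // Only the first constructed instance in this JVM runs the check.
                if (migrationChecked.compareAndSet(false, true)) {
                    migrateIfNeeded();
                }
            }

            /** Reads the schema-version table and applies updates if it is behind. */
            protected abstract void migrateIfNeeded();
        }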

    Read the article

  • Autodetect Presence of CSV Headers in a File

    - by banzaimonkey
    Short question: How do I automatically detect whether a CSV file has headers in the first row? Details: I've written a small CSV parsing engine that places the data into an object that I can access as (approximately) an in-memory database. The original code was written to parse third-party CSV with a predictable format, but I'd like to be able to use this code more generally. I'm trying to figure out a reliable way to automatically detect the presence of CSV headers, so the script can decide whether to use the first row of the CSV file as keys / column names or start parsing data immediately. Since all I need is a boolean test, I could easily specify an argument after inspecting the CSV file myself, but I'd rather not have to (go go automation). I imagine I'd have to parse the first 3 to ? rows of the CSV file and look for a pattern of some sort to compare against the headers. I'm having nightmares of three particularly bad cases in which: The headers include numeric data for some reason The first few rows (or large portions of the CSV) are null The headers and data look too similar to tell them apart If I can get a "best guess" and have the parser fail with an error or spit out a warning if it can't decide, that's OK. If this is something that's going to be tremendously expensive in terms of time or computation (and take more time than it's supposed to save me) I'll happily scrap the idea and go back to working on "important things". I'm working with PHP, but this strikes me as more of an algorithmic / computational question than something that's implementation-specific. If there's a simple algorithm I can use, great. If you can point me to some relevant theory / discussion, that'd be great, too. If there's a giant library that does natural language processing or 300 different kinds of parsing, I'm not interested.
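
    For reference, Python's standard library ships exactly this heuristic: csv.Sniffer.has_header() votes column by column on whether the first row's cell types and lengths differ from the rows below it. Even for a PHP project it is a useful algorithm to port (the file name here is assumed):

        import csv

        with open("data.csv", newline="") as f:
            sample = f.read(4096)                     # a few rows is enough
            has_header = csv.Sniffer().has_header(sample)
        print("header detected:", has_header)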

    Read the article

  • Simulating pass by reference for an array reference (i.e. a reference to a reference) in Java

    - by Leif Andersen
    I was wondering, in Java, is it possible to in any way simulate pass by reference for an array? Yes, I know the language doesn't support it, but is there any way I can do it? Say, for example, I want to create a method that reverses the order of all the elements in an array. (I know that this code snippet isn't the best example, as there are better algorithms to do this, but this is a good example of the type of thing I want to do for more complex problems). Currently, I need to make a class like this: public static void reverse(Object[] arr) { Object[] tmpArr = new Object[arr.length]; count = arr.length - 1; for(Object i : arr) tmpArr[count--] = i; // I would like to do arr = tmpArr, but that will only make the shallow // reference tmpArr, I would like to actually change the pointer they passed in // Not just the values in the array, so I have to do this: count = arr.length - 1; for(Object i : tmpArr) arr[count--] = i; return; } Yes, I know that I could just swap the values until I get to the middle, and it would be much more efficient, but for other, more complex purposes, is there any way that I can manipulate the actual pointer? Again, thank you.

    Read the article

  • Can't operator == be applied to generic types in C#?

    - by Hosam Aly
    According to the documentation of the == operator in MSDN, For predefined value types, the equality operator (==) returns true if the values of its operands are equal, false otherwise. For reference types other than string, == returns true if its two operands refer to the same object. For the string type, == compares the values of the strings. User-defined value types can overload the == operator (see operator). So can user-defined reference types, although by default == behaves as described above for both predefined and user-defined reference types. So why does this code snippet fail to compile? void Compare<T>(T x, T y) { return x == y; } I get the error Operator '==' cannot be applied to operands of type 'T' and 'T'. I wonder why, since as far as I understand the == operator is predefined for all types? Edit: Thanks everybody. I didn't notice at first that the statement was about reference types only. I also thought that bit-by-bit comparison is provided for all value types, which I now know is not correct. But, in case I'm using a reference type, would the == operator use the predefined reference comparison, or would it use the overloaded version of the operator if a type defined one? Edit 2: Through trial and error, we learned that the == operator will use the predefined reference comparison when using an unrestricted generic type. Actually, the compiler will use the best method it can find for the restricted type argument, but will look no further. For example, the code below will always print true, even when Test.test<B>(new B(), new B()) is called: class A { public static bool operator==(A x, A y) { return true; } } class B : A { public static bool operator==(B x, B y) { return false; } } class Test { void test<T>(T a, T b) where T : A { Console.WriteLine(a == b); } }
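
    The usual workarounds, sketched (these are not from the MSDN page): constrain T to class, after which == is the predefined reference comparison, or let EqualityComparer<T>.Default pick the right comparison at runtime (IEquatable<T>, an overridden Equals, or reference equality):

        using System.Collections.Generic;

        static class GenericCompare
        {
            // Compiles: with the class constraint, == is reference comparison and
            // ignores any user-defined operator== on T.
            public static bool RefEquals<T>(T x, T y) where T : class
            {
                return x == y;
            }

            // Uses IEquatable<T> / overridden Equals when the type provides one.
            public static bool ValueEquals<T>(T x, T y)
            {
                return EqualityComparer<T>.Default.Equals(x, y);
            }
        }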

    Read the article

  • How to deal with recursive dependencies between static libraries using the binutils linker?

    - by Jack Lloyd
    I'm porting an existing system from Windows to Linux. The build is structured with multiple static libraries. I ran into a linking error where a symbol (defined in libA) could not be found in an object from libB. The linker line looked like g++ test_obj.o -lA -lB -o test The problem of course being that by the time the linker finds it needs the symbol from libA, it has already passed it by, and does not rescan, so it simply errors out even though the symbol is there for the taking. My initial idea was of course to simply swap the link (to -lB -lA) so that libA is scanned afterwards, and any symbols missing from libB that are in libA are picked up. But then I find there is actually a recursive dependency between libA and libB! I'm assuming the Visual C++ linker handles this in some way (does it rescan by default?). Ways of dealing with this I've considered: Use shared objects. Unfortunately this is undesirable from the perspective of requiring PIC compilation (this is performance sensitive code and losing %ebx to hold the GOT would really hurt), and shared objects aren't needed. Build one mega ar of all of the objects, avoiding the problem. Restructure the code to avoid the recursive dependency (which is obviously the Right Thing to do, but I'm trying to do this port with minimal changes). Do you have other ideas to deal with this? Is there some way I can convince the binutils linker to perform rescans of libraries it has already looked at when it is missing a symbol?
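
    On the last question specifically, GNU ld can be told to rescan a group of archives until no further symbols are resolved, which tolerates the libA <-> libB cycle without reordering (library names taken from the question):

        # ld rescans every archive inside the group until no new undefined
        # symbols can be resolved, at the cost of slightly slower linking.
        g++ test_obj.o -Wl,--start-group -lA -lB -Wl,--end-group -o test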

    Read the article

  • Reordering arguments using recursion (pros, cons, alternatives)

    - by polygenelubricants
    I find that I often make a recursive call just to reorder arguments. For example, here's my solution for endOther from codingbat.com: Given two strings, return true if either of the strings appears at the very end of the other string, ignoring upper/lower case differences (in other words, the computation should not be "case sensitive"). Note: str.toLowerCase() returns the lowercase version of a string. public boolean endOther(String a, String b) { return a.length() < b.length() ? endOther(b, a) : a.toLowerCase().endsWith(b.toLowerCase()); } I'm very comfortable with recursions, but I can certainly understand why some perhaps would object to it. There are two obvious alternatives to this recursion technique: Swap a and b traditionally public boolean endOther(String a, String b) { if (a.length() < b.length()) { String t = a; a = b; b = t; } return a.toLowerCase().endsWith(b.toLowerCase()); } Not convenient in a language like Java that doesn't pass by reference Lots of code just to do a simple operation An extra if statement breaks the "flow" Repeat code public boolean endOther(String a, String b) { return (a.length() < b.length()) ? b.toLowerCase().endsWith(a.toLowerCase()) : a.toLowerCase().endsWith(b.toLowerCase()); } Explicit symmetry may be a nice thing (or not?) Bad idea unless the repeated code is very simple ...though in this case you can get rid of the ternary and just || the two expressions So my questions are: Is there a name for these 3 techniques? (Are there more?) Is there a name for what they achieve? (e.g. "parameter normalization", perhaps?) Are there official recommendations on which technique to use (when)? What are other pros/cons that I may have missed?

    Read the article

  • Apple Script error: Can't get Folder

    - by Jonathan Hirsch
    I started using AppleScript today, because I wanted to be able to automatically send files to someone when they go into a specific folder. I first tried with Automator, but the files were never attached to the emails, so I figured I could try with AppleScript. At first, I tried to simply create an email with an attached file. The email was created, but the file was not attached. After a while, I stumbled onto this post that was trying to list files from a folder, so I tried that just to compare it to my code. It gave me an error telling me it is impossible to list files from that folder. So I tried setting a path to a specific folder, and I got an error saying the path can't be made. This is the code I used for the last part: tell application "Finder" set folderPath to folder "Macintosh HD: Users:Jonathan:Desktop:Send_Mail" set fileList to name of every file in folderPath end tell and this is the error I got. error "Finder got an error: Can’t get folder \"Macintosh HD: Users:Jonathan:Desktop:Send_Mail\"." number -1728 from folder "Macintosh HD: Users:Jonathan:Desktop:Send_Mail". I later tried with another folder, and I always get this error, even when using the Users folder for example. Any suggestions? Thanks
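
    A guess from the error text alone: the HFS path contains a space after "Macintosh HD:", so Finder looks for a folder literally named " Users". A variant without the stray space, or one anchored to Finder's desktop object, avoids the problem:

        tell application "Finder"
            -- no space after the volume name
            set folderPath to folder "Macintosh HD:Users:Jonathan:Desktop:Send_Mail"
            -- or, independent of the volume name:
            -- set folderPath to folder "Send_Mail" of desktop
            set fileList to name of every file in folderPath
        end tell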

    Read the article

  • Java SortedMap to Scala TreeMap

    - by Dave
    I'm having trouble converting a Java SortedMap into a Scala TreeMap. The SortedMap comes from deserialization and needs to be converted into a Scala structure before being used. Some background, for the curious, is that the serialized structure is written through XStream and on deserializing I register a converter that says anything that can be assigned to SortedMap[Comparable[_],_] should be given to me. So my convert method gets called and is given an Object that I can safely cast because I know it's of type SortedMap[Comparable[_],_]. That's where it gets interesting. Here's some sample code that might help explain it. // a conversion from comparable to ordering scala> implicit def comparable2ordering[A <: Comparable[A]](x: A): Ordering[A] = new Ordering[A] { | def compare(x: A, y: A) = x.compareTo(y) | } comparable2ordering: [A <: java.lang.Comparable[A]](x: A)Ordering[A] // jm is how I see the map in the converter. Just as an object. I know the key // is of type Comparable[_] scala> val jm : Object = new java.util.TreeMap[Comparable[_], String]() jm: java.lang.Object = {} // It's safe to cast as the converter only gets called for SortedMap[Comparable[_],_] scala> val b = jm.asInstanceOf[java.util.SortedMap[Comparable[_],_]] b: java.util.SortedMap[java.lang.Comparable[_], _] = {} // Now I want to convert this to a tree map scala> collection.immutable.TreeMap() ++ (for(k <- b.keySet) yield { (k, b.get(k)) }) <console>:15: error: diverging implicit expansion for type Ordering[A] starting with method Tuple9 in object Ordering collection.immutable.TreeMap() ++ (for(k <- b.keySet) yield { (k, b.get(k)) })
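
    A sketch, assuming the keys use their natural ordering (they are Comparable, per the converter registration): pin the key type, supply one explicit Ordering built from compareTo, and copy the entries across instead of relying on an implicit conversion, which is what diverges above:

        import scala.collection.JavaConverters._
        import scala.collection.immutable.TreeMap

        def toTreeMap[V](obj: AnyRef): TreeMap[Comparable[AnyRef], V] = {
          val jm = obj.asInstanceOf[java.util.SortedMap[Comparable[AnyRef], V]]
          // One explicit Ordering for the keys, built from Comparable.compareTo.
          implicit val ord: Ordering[Comparable[AnyRef]] =
            new Ordering[Comparable[AnyRef]] {
              def compare(a: Comparable[AnyRef], b: Comparable[AnyRef]): Int = a.compareTo(b)
            }
          TreeMap(jm.asScala.toSeq: _*)
        }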

    Read the article

  • Daylight Savings Handling in DateDiff() in MS Access?

    - by PowerUser
    I am fully aware of DateDiff()'s inability to handle daylight savings issues. Since I often use it to compare the number of hours or days between 2 datetimes several months apart, I need to write up a solution to handle DST. This is what I came up with, a function that first subtracts 60 minutes from a datetime value if it falls within the date ranges specified in a local table (LU_DST). Thus, the usage would be: datediff("n",Conv_DST_to_Local([date1]),Conv_DST_to_Local([date2])) My question is: Is there a better way to handle this? I'm going to make a wild guess that I'm not the first person with this question. This seems like the kind of thing that should have been added to one of the core reference libraries. Is there a way for me to access my system clock to ask it if DST was in effect at a certain date & time? Function Conv_DST_to_Local(X As Date) As Date Dim rst As DAO.Recordset Set rst = CurrentDb.OpenRecordset("LU_DST") Conv_DST_to_Local = X While rst.EOF = False If X > rst.Fields(0) And X < rst.Fields(1) Then Conv_DST_to_Local = DateAdd("n", -60, X) rst.MoveNext Wend End Function Notes I have visited and imported the BAS file of http://www.cpearson.com/excel/TimeZoneAndDaylightTime.aspx. I spent at least an hour by now reading through it and, while it may do its job well, I can't figure out how to modify it to my needs. But if you have an answer using his data structures, I'll take a look. Timezones are not an issue since this is all local time.

    Read the article

  • "Public" nested classes or not

    - by Frederick
    Suppose I have a class 'Application'. In order to be initialised it takes certain settings in the constructor. Let's also assume that there are so many settings that it's compelling to place them in a class of their own. Compare the following two implementations of this scenario. Implementation 1: class Application { Application(ApplicationSettings settings) { //Do initialisation here } } class ApplicationSettings { //Settings related methods and properties here } Implementation 2: class Application { Application(Application.Settings settings) { //Do initialisation here } class Settings { //Settings related methods and properties here } } To me, the second approach is very much preferable. It is more readable because it strongly emphasises the relation between the two classes. When I write code to instantiate the Application class anywhere, the second approach is going to look prettier. Now just imagine the Settings class itself in turn had some similarly "related" class and that class in turn did so too. Go only three such levels deep and the class naming gets out of hand in the 'non-nested' case. If you nest, however, things still stay elegant. Despite the above, I've read people saying on StackOverflow that nested classes are justified only if they're not visible to the outside world; that is if they are used only for the internal implementation of the containing class. The commonly cited objection is bloating the size of the containing class's source file, but partial classes is the perfect solution for that problem. My question is, why are we wary of the "publicly exposed" use of nested classes? Are there any other arguments against such use?

    Read the article

  • How to connect a new query script with SSMS add-in?

    - by squillman
    I'm trying to create an SSMS add-in. One of the things I want to do is to create a new query window and programmatically connect it to a server instance (in the context of a SQL Login). I can create the new query script window just fine but I can't find how to connect it without first manually connecting to something else (like the Object Explorer). So in other words, if I connect Object Explorer to a SQL instance manually and then execute the method of my add-in that creates the query window I can connect it using this code: ServiceCache.ScriptFactory.CreateNewBlankScript( Editors.ScriptType.Sql, ServiceCache.ScriptFactory.CurrentlyActiveWndConnectionInfo.UIConnectionInfo, null); But I don't want to rely on CurrentlyActiveWndConnectionInfo.UIConnectionInfo for the connection. I want to set a SQL Login username and password programmatically. Does anyone have any ideas? EDIT: I've managed to get the query window connected by setting the last parameter to an instance of System.Data.SqlClient.SqlConnection. However, the connection uses the context of the last login that was connected instead of what I'm trying to set programmatically. That is, the user it connects as is the one selected in the Connection Dialog that you get when you click the New Query button and don't have an Object Explorer connected. EDIT2: I'm writing (or hoping to write) an add-in to automatically send a SQL statement and the execution results to our case-tracking system when run against our production servers. One thought I had was to remove write permissions and assign logins through this add-in which will also force the user to enter a case #, canceling the statement if it's not there. Another thought I've just had is to inspect the server name in ServiceCache.ScriptFactory.CurrentlyActiveWndConnectionInfo.UIConnectionInfo and compare it to our list of production servers. If it matches and there's no case # then cancel the query.

    Read the article

  • Shared value in parallel python

    - by Jonathan
    Hey all- I'm using ParallelPython to develop a performance-critical script. I'd like to share one value between the 8 processes running on the system. Please excuse the trivial example but this illustrates my question. def findMin(listOfElements): for el in listOfElements: if el < min: min = el import pp min = 0 myList = range(100000) job_server = pp.Server() f1 = job_server.submit(findMin, myList[0:25000]) f2 = job_server.submit(findMin, myList[25000:50000]) f3 = job_server.submit(findMin, myList[50000:75000]) f4 = job_server.submit(findMin, myList[75000:100000]) The pp docs don't seem to describe a way to share data across processes. Is it possible? If so, is there a standard locking mechanism (like in the threading module) to confirm that only one update is done at a time? l = Lock() if(el < min): l.acquire if(el < min): min = el l.release I understand I could keep a local min and compare the 4 in the main thread once returned, but by sharing the value I can do some better pruning of my BFS binary tree and potentially save a lot of loop iterations. Thanks- Jonathan
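
    For comparison, a sketch with the standard multiprocessing module (pp itself does not expose a shared-value primitive): Value gives a lock-protected shared double that every worker can read and update:

        from multiprocessing import Pool, Value

        shared_min = None

        def init(v):
            global shared_min
            shared_min = v          # hand the shared Value to each worker

        def find_min(chunk):
            local = min(chunk)
            with shared_min.get_lock():          # one writer at a time
                if local < shared_min.value:
                    shared_min.value = local

        if __name__ == "__main__":
            data = list(range(100000))
            current = Value("d", float("inf"))
            chunks = [data[i::4] for i in range(4)]
            with Pool(4, initializer=init, initargs=(current,)) as pool:
                pool.map(find_min, chunks)
            print(current.value)                 # 0.0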

    Read the article
