Search Results

Search found 9254 results on 371 pages for 'approach'.

Page 81/371 | < Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >

  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL2k8. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple of other tables that define the question text for each QuestionId and demographics for each RespondentId. Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates an export; it's working. The problem is that it seems slow, and we don't have that much data (fewer than 50k respondents). Right now I'm thinking, "why am I 'paying' to de-aggregate the data for each query? Why don't I cache that?" The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level: find out which respondents are being exported, then select their combined cached de-aggregated data.

    To me the quick and dirty fix is a totally flat table: RespondentId, Question1, Question2, etc. The problem is, we have multiple clients, so that doesn't scale, and I don't want to have to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the results of a SELECT * FROM Data WHERE RespondentId = x FOR XML AUTO. With that in place, I would then be able to get my export with filtering and XML calls into the XML column.

    What are you doing to export aggregated data in a flattened format (CSV, Excel, etc.)? Does this approach seem OK? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], XmlCol.RinseAndRepeat)... Is there a better technology/approach for this? Thanks!
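
    A minimal T-SQL sketch of the caching idea described above; the table and column names (Data, Respondent, CachedXml, CompletedOn) are hypothetical stand-ins for the real schema:

        -- cache the de-aggregated rows once per respondent
        UPDATE r
        SET    CachedXml = (SELECT QuestionId, Answer
                            FROM   Data d
                            WHERE  d.RespondentId = r.RespondentId
                            FOR XML AUTO, TYPE)
        FROM   Respondent r;

        -- later, export straight from the cached XML; the WHERE clause
        -- is where the dynamic criteria would go
        SELECT RespondentId,
               CachedXml.value('(/d[@QuestionId=1]/@Answer)[1]',
                               'nvarchar(50)') AS [Question 1]
        FROM   Respondent
        WHERE  CompletedOn >= '20100101';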

    Read the article

  • Efficient way to copy a collection of Nodes, treat them, and then serialize?

    - by Danjah
    Hi all, I initially thought a regex to remove YUI3 classNames (or whole class attributes) and id attributes from a serialized DOM string was a sound enough approach - but now I'm not sure, given various warnings about using regex on HTML. I'm toying with the idea of making a copy of the DOM structure in question, performing:

        var nodeStructure = Y.one('#wrap').all('*'); // A YUI3 NodeList
        // Remove unwanted classNames.. I'd need to maintain a list of them to remove :/
        nodeStructure.removeClass('unwantedClassName');

    and then:

        // I believe this can be done on a NodeList collection...
        nodeStructure.removeAttribute('id');

    I'm not quite sure what I'd need to do to 'copy' a collection of Nodes anyway, as I don't actually want to do the above to my living markup: it's only being saved, not 'closed' or 'exited', so a user could continue to change the markup and then save again. The above doesn't make a copy, I know. Is this efficient? Is there a better way to 'sanitize' my live markup of framework additions to the DOM (and maybe other things too at a later point) before saving it as a string? If it is a good approach, what's a safe way to go about copying my collection of Nodes for safe cleaning? Thanks! d
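
    One hedged way to get the "copy first, then clean" behaviour without touching the live markup is to clone the subtree with plain DOM calls and sanitize only the clone; the '#wrap' id and the class name below come from the question, everything else is standard DOM:

        // clone the subtree so the live markup stays untouched
        var copy = document.getElementById('wrap').cloneNode(true);

        // strip framework attributes from the clone only
        var all = copy.getElementsByTagName('*');
        for (var i = 0; i < all.length; i++) {
            all[i].removeAttribute('id');
            all[i].className = all[i].className.replace(/\bunwantedClassName\b/g, '');
        }

        // serialize the cleaned copy for saving
        var serialized = copy.innerHTML;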

    Read the article

  • Does C# allow method overloading, PHP style (__call)?

    - by mr.b
    In PHP, there is a special method named __call($calledMethodName, $arguments), which allows a class to catch calls to non-existing methods and do something about them. Since most classic languages are statically typed, the compiler won't allow calling a method that does not exist; I'm clear on that part. What I want to accomplish (and I figured this is how I would do it in PHP, but C# is something else) is to proxy calls to a class's methods and log each of these calls. Right now, I have code similar to this:

        class ProxyClass {
            static object logger; // type was elided in the original snippet
            public AnotherClass inner { get; private set; }
            public ProxyClass() { inner = new AnotherClass(); }
        }

        class AnotherClass {
            public void A() {}
            public void B() {}
            public void C() {}
            // ...
        }

        // meanwhile, in happyCodeLandia...
        ProxyClass pc = new ProxyClass();
        pc.inner.A();
        pc.inner.B();
        // ...

    So, how can I proxy calls to an object instance in an extensible way? Extensible, meaning that I don't have to modify ProxyClass whenever AnotherClass changes. In my case, AnotherClass can have any number of methods, so it wouldn't be appropriate to overload or wrap all methods to add logging. I am aware that this might not be the best approach for this kind of problem, so if anyone has an idea what approach to use, shoot. Thanks!
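
    For what it's worth, C# 4.0's System.Dynamic.DynamicObject is the closest analogue to __call. A hedged sketch of a logging proxy along those lines, with AnotherClass and the Console logging as stand-ins for the real types:

        using System;
        using System.Dynamic;

        class LoggingProxy : DynamicObject {
            private readonly object inner;
            public LoggingProxy(object inner) { this.inner = inner; }

            // invoked for any method call on a dynamic reference,
            // much like PHP's __call (overload resolution is simplified here)
            public override bool TryInvokeMember(InvokeMemberBinder binder,
                                                 object[] args, out object result) {
                Console.WriteLine("Calling " + binder.Name); // log the call
                result = inner.GetType().GetMethod(binder.Name).Invoke(inner, args);
                return true;
            }
        }

        // usage: dynamic pc = new LoggingProxy(new AnotherClass()); pc.A();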

    Read the article

  • Is it better to use a relational database or document-based database for an app like Wufoo?

    - by mboyle
    I'm working on an application that's similar to Wufoo in that it allows our users to create their own databases and collect/present records with auto-generated forms and views. Since every user is creating a different schema (one user might have a database of their baseball card collection, another might have a database of their recipes), our current approach is using MySQL to create a separate database for every user, each with its own tables. In other words, the databases our MySQL server contains look like:

        main-web-app-db (our web app, containing tables for user account info, billing, etc.)
        user_1_db (baseball_cards_table)
        user_2_db (recipes_table)
        ...

    And so on. If a user wants to set up a new database to keep track of their DVD collection, we'd do a "create database ..." with "create table ...". If they enter some data and then decide they want to change a column, we'd do an "alter table ...". Now, the further along I get with building this out, the more it seems like MySQL is poorly suited to handling it.

    1) My first concern is that switching databases on every request - first to our main app's database for authentication etc., then to the user's personal database - is going to be inefficient.

    2) The second concern is that there's going to be a limit to the number of databases a single MySQL server can host. Pretending for a moment this application had 500,000 user databases, is MySQL designed to operate this way? What if it were a million, or more?

    3) Lastly, is this method going to be a nightmare to support and scale? I've never heard of MySQL being used in this way, so I do worry about how this affects things like replication and other methods of scaling.

    To me, it seems like MySQL wasn't built to be used in this way, but what do I know. I've been looking at document-based databases like MongoDB, CouchDB, and Redis as alternatives, because it seems like a schema-less approach to this particular problem makes a lot of sense. Can anyone offer some advice on this?

    Read the article

  • URL development and mod_rewrite

    - by iRector
    My site is made up of the main page and multiple sub-directories, all under the same domain. My URLs currently look like the left column below; the ideal clean versions are on the right:

        mysite.com                             ->  mysite.com
        mysite.com/?content=content1           ->  mysite.com/content1/
        mysite.com/?content=content2&page=4    ->  mysite.com/content2/4/
        mysite.com/?content=content3           ->  mysite.com/content3/
        mysite.com/?content=content4           ->  mysite.com/content4/
        mysite.com/?content=article&id=34      ->  mysite.com/article/34/

    Then the sub-directories are essentially the same (mysite.com/subdir, mysite.com/subdir2, mysite.com/subdir3, etc.):

        mysite.com/subdir/?content=content1          ->  mysite.com/subdir/content1/
        mysite.com/subdir/?content=content2&page=4   ->  mysite.com/subdir/content2/4/
        mysite.com/subdir/?content=content3          ->  mysite.com/subdir/content3/
        mysite.com/subdir/?content=content4          ->  mysite.com/subdir/content4/
        mysite.com/subdir/?content=article&id=34     ->  mysite.com/subdir/article/34/

    I've used mod_rewrite briefly, but I'm not sure how to approach these multiple variables. Also, how would I differentiate between the actual subfolders and the content variable, so as to prevent 'subdir' or 'subdir2' from being plugged in as the content variable for the root site? I've played around with plenty of code snippets, but I've wiped my .htaccess slate clean and approach you all in an attempt to help me repopulate it. Your input would be thoroughly appreciated.

    Note: the only time the page query string is needed is when content == content2 (?content=content2&page=4). The same rule is shared by the article/id relationship; all other 'content' values are expected to be dynamic.
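
    A hedged .htaccess sketch of one way these rules might be expressed; it assumes the sub-sites are real directories on disk, which is what lets the -d check keep 'subdir' from being swallowed as a content value:

        RewriteEngine On

        # real files and directories (the sub-sites themselves) are served as-is
        RewriteCond %{REQUEST_FILENAME} -f [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^ - [L]

        # content2 and article each carry a second variable
        RewriteRule ^(subdir[0-9]*/)?content2/([0-9]+)/?$ /$1?content=content2&page=$2 [L]
        RewriteRule ^(subdir[0-9]*/)?article/([0-9]+)/?$  /$1?content=article&id=$2 [L]

        # everything else is a plain, dynamic content value
        RewriteRule ^(subdir[0-9]*/)?([a-zA-Z0-9_-]+)/?$  /$1?content=$2 [L]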

    Read the article

  • Approaches for cross server content sharing?

    - by Anonymity
    I've been tasked with finding the best solution for serving up content on our new site from another one of our sites. Several approaches have been suggested to me. Those I've looked into include using SharePoint's Lists web service to grab the list through JavaScript, which results in XSS issues and is not an option. Another suggestion was to build a server-side custom web service and use SharePoint request forms to get the information; this is something I've only very briefly looked at. It's been suggested that I try permitting the requesting site in the HTTP headers of the serving site, since I have access to both. This ultimately resulted in a semi-working solution with major security holes (I had to include a username/password in the request to appease AD authentication). This was done by allowing Access-Control-Allow-Origin: *. The most direct approach I could think of was to simply build the web part in our new environment and have the authors manually update its content the same as they would on the other site. Is any one of these suggestions more valid than another? Which would be the best approach? Are there other suggestions I may be overlooking? I'm also not sure if web crawling or content scraping really holds water here...

    Read the article

  • What are the lengths/limits of the C preprocessor as a language creation tool? Where can I learn more about

    - by Weston C
    In his FAQ at http://www2.research.att.com/~bs/bs_faq.html#bootstrapping, Bjarne Stroustrup says: "To build [Cfront, the first C++ compiler], I first used C to write a 'C with Classes'-to-C preprocessor. 'C with Classes' was a C dialect that became the immediate ancestor to C++... I then wrote the first version of Cfront in 'C with Classes'." When I read this, it piqued my interest in the C preprocessor. I'd seen its macro capabilities as suitable for simplifying common expressions, but hadn't thought about its ability to significantly add to syntax and semantics on the level that I imagine bringing classes to C took. So now I have a few questions on my mind:

    1) Are there other examples of this approach to bootstrapping a language off of C?
    2) Is the source to Stroustrup's original work available anywhere?
    3) Where could I learn more about the specifics of utilizing this technique?
    4) What are the lengths/limits of that approach? Could one, say, create a set of preprocessor macros that let someone write in something significantly Lisp/Scheme-like?
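
    As a tiny, hedged illustration of the kind of surface-syntax changes cpp alone can make - it rewrites tokens, but it cannot parse or carry state between macros, which is roughly where its limits lie:

        #include <stdio.h>

        /* purely token-level "new syntax" -- cpp never sees program structure */
        #define unless(cond) if (!(cond))
        #define until(cond)  while (!(cond))
        #define forever      for (;;)

        int main(void) {
            int n = 0;
            until (n == 3) {
                unless (n == 1) printf("n = %d\n", n);
                n++;
            }
            return 0;
        }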

    Read the article

  • DDD: Enum-like entities

    - by Chris
    Hi all, I have the following DB model:

        Person table
        ID | Name  | StateId
        --------------------
        1  | Joe   | 1
        2  | Peter | 1
        3  | John  | 2

        State table
        ID | Desc
        -------------
        1  | Working
        2  | Vacation

    and the domain model would be (simplified):

        public class Person {
            public int Id { get; }
            public string Name { get; set; }
            public State State { get; set; }
        }

        public class State {
            private int id;
            public string Name { get; set; }
        }

    The state might be used in the domain logic, e.g.:

        if (person.State == State.Working)
            // some logic

    So from my understanding, the State acts like a value object which is used for domain logic checks. But it also needs to be present in the DB model to represent a clean ERM. So State might be extended to:

        public class State {
            private int id;
            public string Name { get; set; }
            public static State New {
                get { return new State([hardCodedIdHere?], [hardCodedNameHere?]); }
            }
        }

    But using this approach, the name of the state would be hardcoded into the domain. Do you know what I mean? Is there a standard approach for such a thing? From my point of view, what I am trying to do is use an object (which is persisted, from the ERM design perspective) as a sort of value object within my domain. What do you think?

    Question update: Probably my question wasn't clear enough. What I need to know is how I would use an entity (like the State example) that is stored in a database within my domain logic, avoiding things like:

        if (person.State.Id == State.Working.Id)
            // some logic

    or

        if (person.State.Id == WORKING_ID)
            // some logic
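
    A hedged sketch of the usual "enumeration class" shape this question is circling: well-known static instances whose ids mirror the State table rows, compared by id rather than magic numbers (the ids and names are the ones from the tables above):

        public class State {
            public static readonly State Working  = new State(1, "Working");
            public static readonly State Vacation = new State(2, "Vacation");

            public int Id { get; private set; }
            public string Name { get; private set; }

            private State(int id, string name) { Id = id; Name = name; }

            // identity is the database key, so loaded and static instances compare equal
            public override bool Equals(object obj) {
                var other = obj as State;
                return other != null && other.Id == Id;
            }
            public override int GetHashCode() { return Id; }
        }

        // domain logic reads naturally again:
        // if (person.State.Equals(State.Working)) ...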

    Read the article

  • Avoid an "out of memory error" in Java(eclipse), when using large data structure?

    - by gnomed
    OK, so I am writing a program that unfortunately needs to use a huge data structure to complete its work, but it is failing with an "out of memory error" during its initialization. While I understand entirely what that means and why it is a problem, I am having trouble overcoming it, since my program needs to use this large structure and I don't know any other way to store it. The program first indexes a large corpus of text files that I provide. This works fine. Then it uses this index to initialize a large 2D array. This array will have n x n entries, where "n" is the number of unique words in the corpus of text. For the relatively small chunk I am testing it on (about 60 files) it needs to make approximately 30,000 x 30,000 entries. This will probably be bigger once I run it on my full intended corpus, too. It consistently fails every time, after it indexes, while it is initializing the data structure (to be worked on later). Things I have done include: revamping my code to use a primitive int[] instead of a TreeMap, eliminating redundant structures, etc. Also, I have run Eclipse with "eclipse -vmargs -Xmx2g" to max out my allocated memory. I am fairly confident this is not going to be a simple one-line fix, but is most likely going to require a very new approach. I am looking for what that approach is. Any ideas? Thanks, B.
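
    Back-of-the-envelope arithmetic suggests why the 2 GB heap cannot work here even with primitive ints:

        public class MatrixMemory {
            public static void main(String[] args) {
                long entries = 30000L * 30000L; // 9.0e8 cells in the 2D array
                long bytes = entries * 4L;      // 4 bytes per primitive int
                System.out.println(bytes);      // 3,600,000,000 ~ 3.4 GiB > 2 GB heap
            }
        }

    Note also that -vmargs sizes Eclipse's own JVM; a program launched from a Run Configuration gets its own -Xmx setting. Either way, the full matrix exceeds the heap, which points toward a sparse or disk-backed representation rather than more memory.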

    Read the article

  • Is BinaryFormatter in C# a good way to read files?

    - by mr-pac
    I want to read a binary file which was created outside of my program. One obvious way in C# to read a binary file is to define a class representing the file, then use a BinaryReader, read from the file via the Read* methods, and assign the return values to the class properties. What I don't like about this approach is that I have to manually write the code that reads the file, even though the defined structure represents how the file is stored. I also have to keep the read order correct. After looking around a bit, I came across BinaryFormatter, which can automatically serialize and deserialize objects in binary format. One great advantage would be that I could read and also write the file without writing additional code. However, I wonder if this approach is good for files created by other programs, and not just for serialized .NET objects. Take for example a graphics file format like BMP. Would it be a good idea to read the file with a BinaryFormatter, or is it better to read and write manually via BinaryReader and BinaryWriter? Or are there other approaches which suit this better? I'm not looking for concrete examples, just advice on the best way to implement this.
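
    For what it's worth, BinaryFormatter only understands its own .NET serialization payload, so for a foreign format like BMP the manual route is the workable one. A hedged sketch of reading the fixed BMP file header with BinaryReader, which also shows why the read order matters:

        using System;
        using System.IO;

        class BmpPeek {
            static void Main() {
                using (var reader = new BinaryReader(File.OpenRead("image.bmp"))) {
                    // BITMAPFILEHEADER: the field order is what keeps reads correct
                    string magic = new string(reader.ReadChars(2)); // "BM"
                    uint fileSize = reader.ReadUInt32();
                    reader.ReadUInt32();                    // two reserved 16-bit fields
                    uint pixelOffset = reader.ReadUInt32(); // where pixel data begins

                    // BITMAPINFOHEADER starts here
                    uint headerSize = reader.ReadUInt32();  // 40 for the classic header
                    int width  = reader.ReadInt32();
                    int height = reader.ReadInt32();
                    Console.WriteLine("{0}: {1}x{2}, {3} bytes", magic, width, height, fileSize);
                }
            }
        }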

    Read the article

  • How do you tell that your unit tests are correct?

    - by Jacob Adams
    I've only done minor unit testing at various points in my career. Whenever I start diving into it again, it always troubles me: how do I prove that my tests are correct? How can I tell that there isn't a bug in my unit test? Usually I end up running the app, proving it works, then using the unit test as a sort of regression test. What is the recommended approach, and/or what approach do you take to this problem?

    Edit: I also realize that you could write small, granular unit tests that would be easy to understand. However, if you assume that small, granular code is flawless and bulletproof, you could just write small, granular programs and not need unit testing.

    Edit 2: For the arguments "unit testing is for making sure your changes don't break anything" and "this will only happen if the test has the exact same flaw as the code": what if the test overfits? It's possible for a bad test to pass both good and bad code. My main question is: what good is unit testing if your tests can be flawed? You can't really improve your confidence in your code, can't really prove your refactoring worked, and can't really prove that you met the specification.

    Read the article

  • Loading Dimension Tables - Methodologies

    - by Nev_Rahd
    Hello, recently I have been working on a project where I need to populate dimension tables from EDW tables. The EDW tables are Type II, meaning they maintain historical data. When it comes to loading a dimension table, the source may be multiple EDW tables, or a single table with multi-level pivoting (on attributes). By that I mean: there could be 10 records - one for each attribute - which need to be pivoted on domain_code to make a single row in the dimension. Out of these 10 records, some attributes have the same domain_code but different sub_domain_code values, which need further pivoting on the subdomain code.

    For example: domain codes 01, 02, 03 are a straight pivot on domain code. I might also have domain code 10 with a subdomain code/version of 2006, 2007, 2008, 2009. That means I need to split my source table with the above attributes into two: one for domain code, and the other for domain code + version. So far so good.

    When it comes to loading the dimension table: as per the design specs for the dimensions (originally written by a third party), what they want is that for every single change in an EDW attribute, the load should assemble all the related records for that natural key - the new one plus the current values of the other attributes - and process them to create and insert a new dimension record. That means if a single extract contains 100 updated records (one per natural key), it should assemble 100 + (100*9) records to insert into or update the dimension table.

    How good is this approach? The other way I tried is to simply look up the most recent record in the dimension table for that natural key, take the values of the attributes which haven't changed, insert the new row, and update the current one. What would be the better approach: assembling records on the source side for each attribute change, or looking up the dimension's most recent record and processing it? If this doesn't make sense, I'd be happy to elaborate further. Thanks
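
    A hedged sketch of the straight domain_code pivot described above; the table and column names (edw_source, nk, attr_value) are made up to match the description:

        -- collapse one row per attribute into one row per natural key
        SELECT nk,
               MAX(CASE WHEN domain_code = '01' THEN attr_value END) AS attr_01,
               MAX(CASE WHEN domain_code = '02' THEN attr_value END) AS attr_02,
               MAX(CASE WHEN domain_code = '03' THEN attr_value END) AS attr_03,
               -- the subdomain split: domain 10 pivots again on its version
               MAX(CASE WHEN domain_code = '10' AND sub_domain_code = '2009'
                        THEN attr_value END) AS attr_10_2009
        FROM   edw_source
        GROUP  BY nk;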

    Read the article

  • Which web framework or technologies would suit me?

    - by Suraj Chandran
    Hi, I had been working on desktop apps and server-side (non-web) code for some time, and now I am diving into the web for the first time. I plan to write a scalable, enterprise-level app. I have worked with Java, JavaScript, jQuery, etc., but I absolutely hate JSP. So is there any framework that focuses on developing enterprise-level web apps without JSP? I liked Wicket's approach, but I think it is a little lacking in support for dynamic HTML and jQuery (yes, I looked at WiQuery). I also feel that making Wicket apps scalable would take some sweat. Can Spring MVC, Struts 2, etc. help me with this using just Java, JavaScript, and jQuery? Or are there other options for me, like Wicket? Please forgive me if anything above looks insane; I am still working on my understanding of enterprise web apps. NOTE: If you think that I should take a different direction or approach, please do suggest it!

    Read the article

  • Reference a GNU C DLL built in GCC against Cygwin, from C#/NET

    - by Dale Halliwell
    Here is what I want: I have a huge legacy C/C++ codebase written for POSIX, including some very POSIX-specific stuff like pthreads. This can be compiled on Cygwin/GCC and run as an executable under Windows with the Cygwin DLL. What I would like to do is build the codebase itself into a Windows DLL that I can then reference from C# and write a wrapper around to access some parts of it programmatically. I have tried this approach with the very simple "hello world" example at http://www.cygwin.com/cygwin-ug-net/dll.html and it doesn't seem to work.

        #include <stdio.h>

        extern "C" __declspec(dllexport) int hello();

        int hello() {
            printf("Hello World!\n");
            return 42;
        }

    I believe I should be able to reference a DLL built with the above code in C# using something like:

        [DllImport("kernel32.dll")]
        public static extern IntPtr LoadLibrary(string dllToLoad);
        [DllImport("kernel32.dll")]
        public static extern IntPtr GetProcAddress(IntPtr hModule, string procedureName);
        [DllImport("kernel32.dll")]
        public static extern bool FreeLibrary(IntPtr hModule);

        [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
        private delegate int hello();

        static void Main(string[] args)
        {
            var path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "helloworld.dll");
            IntPtr pDll = LoadLibrary(path);
            IntPtr pAddressOfFunctionToCall = GetProcAddress(pDll, "hello");
            hello hello = (hello)Marshal.GetDelegateForFunctionPointer(
                pAddressOfFunctionToCall, typeof(hello));
            int theResult = hello();
            Console.WriteLine(theResult.ToString());
            bool result = FreeLibrary(pDll);
            Console.ReadKey();
        }

    But this approach doesn't seem to work: LoadLibrary returns null. It can find the DLL (helloworld.dll); it is just as if it can't load it or find the exported function. I am sure that if I get this basic case working, I can reference the rest of my codebase in this way. Any suggestions or pointers, or does anyone know if what I want is even possible? Thanks.

    Read the article

  • Dependent Dropdowns with single class element

    - by AJ
    I am trying to create multiple dependent dropdowns (selects) in a unique way. I want to restrict the use of selectors and achieve this with a single class selector on all SELECTs, by figuring out which SELECT was changed from its index. Hence, SELECT[i] changing will change SELECT[i+1] only (and not the previous ones, like SELECT[i-1]). For HTML like this:

        <select class="someclass">
            <option value="volvo">Volvo</option>
            <option value="saab">Saab</option>
            <option value="mercedes">Mercedes</option>
            <option value="audi">Audi</option>
        </select>
        <select class="someclass"></select>
        <select class="someclass"></select>

    where the SELECTs other than the first one will get something via AJAX. I see that the following JavaScript gives me the correct value of the correct SELECT, and the value of i also corresponds to the correct SELECT:

        $(function() {
            $(".someclass").each(function(i) {
                $(this).change(function(x) {
                    alert($(this).val() + i);
                });
            });
        });

    Please note that I really want the minimum-selector approach. I just cannot wrap my head around how to approach this. Should I use arrays? Like store all the SELECTs in an array first and then select those? I believe that since the above code already passes the index i, there should be a way without arrays also. Thanks a bunch. AJ
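
    A hedged sketch of one way to reach the next dropdown from inside the handler without extra selectors or arrays - the jQuery object itself is reused as the collection, and the AJAX endpoint is purely a placeholder:

        $(function() {
            var $selects = $(".someclass"); // queried once, reused as the collection
            $selects.each(function(i) {
                $(this).change(function() {
                    var $next = $selects.eq(i + 1); // only the following SELECT
                    if ($next.length) {
                        // placeholder endpoint; assumed to return <option> markup
                        $next.load("options.php", { parent: $(this).val() });
                    }
                });
            });
        });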

    Read the article

  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl 5.8, with $Test::More::VERSION being '0.80'), which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan - it's generally a bad idea not to catch your unit test dying prematurely. What is the most idiomatic way of running a configurable number of tests, assuming no no_plan or done_testing() is used?

    Details: my unit tests usually take the form:

        use Test::More;

        my @test_set = ( ["Test #1", $param1, $param2, ...]
                        ,["Test #2", $param1, $param2, ...]
                        # ,...
                       );

        foreach my $test (@test_set) {
            run_test($test);
        }

        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows:

        use Test::More;

        BEGIN {
            our @test_set = ();          # same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }

        our @test_set; # Must do!!! Since the first "our" was in BEGIN's scope :(

        foreach my $test (@test_set) {  # same
            run_test($test);
        }

        sub run_test {} # same

    I feel this can be done more idiomatically, but I'm not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration - in BEGIN {} and after it.
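
    For what it's worth, plan() does not have to run at compile time - calling it at runtime before the first assertion is accepted by old Test::More versions as well, which would remove the BEGIN block and the duplicate declaration entirely. A hedged sketch, with count_tests and run_test as stand-ins for the question's own helpers:

        use Test::More;

        my @test_set = ( ["Test #1", 1, 2],
                         ["Test #2", 3, 4] );

        my $expected_tests = 0;
        $expected_tests += count_tests($_) for @test_set;
        plan tests => $expected_tests;   # runtime call is fine: no test has run yet

        run_test($_) for @test_set;

        sub count_tests { return 1 }                  # stand-in for the real counter
        sub run_test { ok(defined $_[0], $_[0][0]) }  # stand-in for the real body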

    Read the article

  • How does one force construction of a global object in a statically linked library? [MSVC9]

    - by Peter C O Johansson
    I have a global list of function pointers. This list should be populated at startup. Order is not important, and there are no dependencies that would complicate static initialization. To facilitate this, I've written a class whose constructor adds a single entry to this list, and I scatter global instances of this class via a macro where necessary. One of the primary goals of this approach is to remove the need to explicitly reference every instance of this class externally, instead allowing each file that needs to register something in the list to do so independently. Nice and clean. However, when placing these objects in a static library, the linker discards (or rather never links in) these compilation units, because no code in them is explicitly referenced. Explicitly referencing symbols in the compilation units would be counterproductive, directly contradicting one of the main goals of the approach. For the same reason, /INCLUDE is not an acceptable option, and /OPT:NOREF is not actually related to this problem. Metrowerks has a __declspec directive for this, and GCC has -force_load, but I cannot find any equivalent for MSVC.
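
    For reference, a minimal sketch of the self-registration pattern being described (names are illustrative only); the question is how to stop the linker from dropping translation units that contain nothing but a Registrar instance:

        #include <vector>

        typedef void (*StartupFn)();

        // function-local static avoids static-initialization-order issues
        std::vector<StartupFn>& registry() {
            static std::vector<StartupFn> r;
            return r;
        }

        struct Registrar {
            explicit Registrar(StartupFn f) { registry().push_back(f); }
        };

        // the macro each file uses to register itself independently
        #define REGISTER_STARTUP(fn) static Registrar registrar_##fn(fn);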

    Read the article

  • What algorithms compute directions from point A to point B on a map?

    - by A. Rex
    How do map providers (such as Google or Yahoo! Maps) suggest directions? I mean, they probably have real-world data in some form, certainly including distances but also perhaps things like driving speeds, presence of sidewalks, train schedules, etc. But suppose the data were in a simpler format, say a very large directed graph with edge weights reflecting distances. I want to be able to quickly compute directions from one arbitrary point to another. Sometimes these points will be close together (within one city) while sometimes they will be far apart (cross-country). Graph algorithms like Dijkstra's algorithm will not work because the graph is enormous. Luckily, heuristic algorithms like A* will probably work. However, our data is very structured, and perhaps some kind of tiered approach might work? (For example, store precomputed directions between certain "key" points far apart, as well as some local directions. Then directions for two far-away points will involve local directions to a key points, global directions to another key point, and then local directions again.) What algorithms are actually used in practice? PS. This question was motivated by finding quirks in online mapping directions. Contrary to the triangle inequality, sometimes Google Maps thinks that X-Z takes longer and is farther than using an intermediate point as in X-Y-Z. But maybe their walking directions optimize for another parameter, too? PPS. Here's another violation of the triangle inequality that suggests (to me) that they use some kind of tiered approach: X-Z versus X-Y-Z. The former seems to use prominent Boulevard de Sebastopol even though it's slightly out of the way. (Edit: this example doesn't work anymore, but did at the time of the original post. The one above still works as of early November 2009.)

    Read the article

  • Uniquing with Existing Core Data Entities

    - by warrenm
    I'm using Core Data to store lots (thousands) of items. A pair of properties on each item is used to determine uniqueness, so when a new item comes in, I compare it against the existing items before inserting it. Since the incoming data is in the form of an RSS feed, there are often many duplicates, and the cost of the uniquing step is O(N^2), which has become significant. Right now, I create a set of existing items before iterating over the list of (possibly) new items. My theory is that on the first iteration all the items will be faulted in, and assuming we aren't pressed for memory, most of those items will remain resident over the course of the iteration. I see my options thusly:

    1. Use string comparison for uniquing, iterating over all "new" items and comparing to all existing items (current approach).
    2. Use a predicate to filter the set of existing items against the properties of the "new" items.
    3. Use a predicate with Core Data to determine the uniqueness of each "new" item (without retrieving the set of existing items).

    Is option 3 likely to be faster than my current approach? Do you know of a better way?
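
    A hedged sketch of option 3 in Swift (the entity name "FeedItem" and its attributes are invented; the idea is to ask the store for a count instead of materializing the existing items):

        import CoreData

        // does a matching item already exist? (hypothetical entity/attributes)
        func itemExists(guid: String, title: String,
                        in context: NSManagedObjectContext) throws -> Bool {
            let request = NSFetchRequest<NSFetchRequestResult>(entityName: "FeedItem")
            request.predicate = NSPredicate(format: "guid == %@ AND title == %@",
                                            guid, title)
            request.fetchLimit = 1
            return try context.count(for: request) > 0
        }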

    Read the article

  • How strict should I be in the "do the simplest thing that could possibly work" while doing TDD?

    - by Support - multilanguage SO
    For TDD you have to:

    1. Create a test that fails.
    2. Do the simplest thing that could possibly work to pass the test.
    3. Add more variants of the test and repeat.
    4. Refactor when a pattern emerges.

    With this approach you're supposed to cover all the cases (that come to my mind, at least), but I wonder if I am being too strict here, and whether it is possible to "think ahead" to some scenarios instead of simply discovering them. For instance, I'm processing a file, and if it doesn't conform to a certain format I am to throw an InvalidFormatException. So my first test was:

        @Test
        void testFormat() {
            // empty doesn't do anything...
            processor.validate("empty.txt");
            try {
                processor.validate("invalid.txt");
                assert false : "Should have thrown InvalidFormatException";
            } catch (InvalidFormatException ife) {
                assert "Invalid format".equals(ife.getMessage());
            }
        }

    I run it and it fails because it doesn't throw an exception. So the next thing that comes to my mind is "do the simplest thing that could possibly work", so I write:

        public void validate(String fileName) throws InvalidFormatException {
            if (fileName.equals("invalid.txt")) {
                throw new InvalidFormatException("Invalid format");
            }
        }

    Doh!! (Although the real code is a bit more complicated, I found myself doing something like this several times.) I know that I will eventually add another file name, and other tests that would make this approach impractical and force me to refactor to something that makes sense (which, if I understood correctly, is the point of TDD: to discover the patterns that usage unveils), but:

    Q: Am I taking the "do the simplest thing..." stuff too literally?

    Read the article

  • How to avoid hard coding credentials into Sharepoint webpart?

    - by SeeBees
    I am building a SharePoint web part that will be used by all users. The web part connects to a web service which needs credentials with higher privileges than common users have. I hard-coded credentials in the web part's code:

        query.Credentials = new System.Net.NetworkCredential("username", "password", "domain");

    (query is an instance of the web service class.) This may not be a good approach. In terms of security, the source code of the web part is available to people who are not allowed to see the credentials. This is bad enough, but are there any other drawbacks to this approach? A web part doesn't have a .config file associated with it; the .config file lives at the application level of the SharePoint site, and I don't want to modify it for a single web part. I wonder if there is a web-part-specific way to solve this problem - say, provide a WebBrowsable property so that an admin can set the credentials. Is this possible? Thanks
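
    The WebBrowsable idea from the question might look roughly like this hedged sketch (property names invented; note that personalization storage is not encrypted, which is its own drawback):

        using System.Web.UI.WebControls.WebParts;

        public class ServiceWebPart : WebPart
        {
            // editable from the web part's tool pane by an admin;
            // Shared scope makes one value apply to all users
            [WebBrowsable(true),
             Personalizable(PersonalizationScope.Shared),
             WebDisplayName("Service username")]
            public string ServiceUsername { get; set; }

            [WebBrowsable(true),
             Personalizable(PersonalizationScope.Shared),
             WebDisplayName("Service password")] // stored unencrypted -- a real drawback
            public string ServicePassword { get; set; }
        }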

    Read the article

  • MS Excel automation without macros in the generated reports. Any thoughts?

    - by ezeki77
    Hello! I know that the web is full of questions like this one, but I still haven't been able to apply the answers I can find to my situation. I realize there is VBA, but I have always disliked having the program/macro living inside the Excel file, with the resulting bloat, security warnings, etc. I'm thinking along the lines of a VBScript that works on a set of Excel files while leaving them macro-free. Now, I've been able to "paint the first column blue" for all files in a directory following this approach, but I need to do more complex operations (charts, pivot tables, etc.), which would be much harder (impossible?) with VBScript than with VBA. For this specific example, knowing how to remove all macros from all files after processing would be enough, but all suggestions are welcome. Any good references? Any advice on how best to approach external batch processing of Excel files will be appreciated. Thanks! PS: I eagerly tried Mark Hammond's great PyWin32 package, but the lack of documentation and interpreter feedback discouraged me.
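
    A hedged VBScript sketch of the external-automation idea: the same Excel COM object model that VBA uses is available from outside the workbook, so anything VBA can do is in reach (the file path is a placeholder):

        ' drive Excel from outside the workbook, leaving the file macro-free
        Dim xl, wb
        Set xl = CreateObject("Excel.Application")
        xl.Visible = False
        xl.DisplayAlerts = False

        Set wb = xl.Workbooks.Open("C:\data\report.xls")
        wb.Sheets(1).Columns(1).Interior.ColorIndex = 5 ' "paint the first column blue"
        wb.Save
        wb.Close
        xl.Quit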

    Read the article

  • Implementing a bitfield using Java enums

    - by soappatrol
    Hello, I maintain a large document archive, and I often use bit fields to record the status of my documents during processing or when validating them. My legacy code simply uses static int constants, such as:

        static int DOCUMENT_STATUS_NO_STATE = 0;
        static int DOCUMENT_STATUS_OK = 1;
        static int DOCUMENT_STATUS_NO_TIF_FILE = 2;
        static int DOCUMENT_STATUS_NO_PDF_FILE = 4;

    This makes it pretty easy to indicate the state a document is in by setting the appropriate flags. For example:

        status = DOCUMENT_STATUS_NO_TIF_FILE | DOCUMENT_STATUS_NO_PDF_FILE;

    Since the approach of using static constants is bad practice, and because I would like to improve the code, I was looking to use enums to achieve the same. There are a few requirements, one of them being the need to save the status into a database as a numeric type, so there is a need to transform the enumeration constants to a numeric value. Below is my first approach, and I wonder if this is the correct way to go about it?

        class DocumentStatus {

            public enum StatusFlag {
                DOCUMENT_STATUS_NOT_DEFINED(1<<0),
                DOCUMENT_STATUS_OK(1<<1),
                DOCUMENT_STATUS_MISSING_TID_DIR(1<<2),
                DOCUMENT_STATUS_MISSING_TIF_FILE(1<<3),
                DOCUMENT_STATUS_MISSING_PDF_FILE(1<<4),
                DOCUMENT_STATUS_MISSING_OCR_FILE(1<<5),
                DOCUMENT_STATUS_PAGE_COUNT_TIF(1<<6),
                DOCUMENT_STATUS_PAGE_COUNT_PDF(1<<7),
                DOCUMENT_STATUS_UNAVAILABLE(1<<8);

                private final long statusFlagValue;

                StatusFlag(long statusFlagValue) {
                    this.statusFlagValue = statusFlagValue;
                }

                public long getStatusFlagValue() {
                    return statusFlagValue;
                }
            }

            /**
             * Translates a numeric status code into a Set of StatusFlag enums.
             * @param statusValue numeric status
             * @return EnumSet representing a document's status
             */
            public EnumSet<StatusFlag> getStatusFlags(long statusValue) {
                EnumSet<StatusFlag> statusFlags = EnumSet.noneOf(StatusFlag.class);
                for (StatusFlag statusFlag : StatusFlag.values()) {
                    long flagValue = statusFlag.statusFlagValue;
                    if ((flagValue & statusValue) == flagValue) {
                        statusFlags.add(statusFlag);
                    }
                }
                return statusFlags;
            }

            /**
             * Translates a set of StatusFlag enums into a numeric status code.
             * @param flags set of status flags
             * @return numeric representation of the document status
             */
            public long getStatusValue(Set<StatusFlag> flags) {
                long value = 0;
                for (StatusFlag statusFlag : flags) {
                    value |= statusFlag.getStatusFlagValue();
                }
                return value;
            }

            public static void main(String[] args) {
                DocumentStatus ds = new DocumentStatus();

                Set<StatusFlag> statusFlags = EnumSet.of(
                    StatusFlag.DOCUMENT_STATUS_OK,
                    StatusFlag.DOCUMENT_STATUS_UNAVAILABLE);
                assert ds.getStatusValue(statusFlags) == 258; // 0000.0001|0000.0010

                long numericStatusCode = 56;
                statusFlags = ds.getStatusFlags(numericStatusCode);
                assert !statusFlags.contains(StatusFlag.DOCUMENT_STATUS_OK);
                assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_TIF_FILE);
                assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_PDF_FILE);
                assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_OCR_FILE);
            }
        }

    Read the article

  • Where to start when doing a Domain Model?

    - by devoured elysium
    Let's say I've made a list of concepts I'll use to draw my Domain Model. Furthermore, I have a couple of Use Cases, from which I derived a couple of System Sequence Diagrams. When drawing the Domain Model, I never know where to start:

    1. Designing the model as I believe the system to be. That is, if I am modelling the human body, I start by adding the class concepts of Heart, Brain, Bowels, Stomach, Eyes, Head, etc.

    2. Starting by designing what the Use Cases need to get done. That is, if I have a Use Case which is about making the human body swallow something, I'd first draw the class concepts for Mouth, Throat, Stomach, Bowels, etc.

    Is the order in which I do things irrelevant? I'd say it would probably be best to design from the Use Case concepts, as they are generally what you want to work with - not other kinds of concepts that, although they help describe the whole system well, might not even be needed for the current project. Is there any other approach that I am not taking into consideration here? How do you usually approach this? Thanks

    Read the article
