Search Results

Search found 8275 results on 331 pages for 'bad appz'.

Page 273/331 | < Previous Page | 269 270 271 272 273 274 275 276 277 278 279 280  | Next Page >

  • Type casting in C++ by detecting the current 'this' object type

    - by Elroy
    My question is related to RTTI in C++ where I'm trying to check if an object belongs to the type hierarchy of another object. The BelongsTo() method checks this. I tried using typeid, but it produces a compile error and I'm not sure of any other way to find the target type to convert to at runtime. #include <iostream> #include <typeinfo> class X { public: // Checks if the input type belongs to the type hierarchy of the input object's type bool BelongsTo(X* p_a) { // I'm trying to check if the current (this) type belongs to the same type // hierarchy as the input type return dynamic_cast<typeid(*p_a)*>(this) != NULL; // error C2059: syntax error 'typeid' } }; class A : public X { }; class B : public A { }; class C : public A { }; int main() { X* a = new A(); X* b = new B(); X* c = new C(); bool test1 = b->BelongsTo(a); // should return true bool test2 = b->BelongsTo(c); // should return false bool test3 = c->BelongsTo(a); // should return true } Making the method virtual and letting derived classes do it seems like a bad idea as I have a lot of classes in the same type hierarchy. Does anybody know of any other/better way to do the same thing? Please suggest.
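
    One possible workaround (a minimal sketch, not the only approach): keep a single virtual hook whose dynamic_cast target is fixed at compile time, and let a CRTP-style mixin generate the per-class override so each class in the hierarchy only has to name itself once. The class names mirror the question; HierarchyMember is a made-up helper name.

    #include <iostream>

    class X {
    public:
        virtual ~X() {}
        // True if *this is an instance of (or derived from) the dynamic type of *other.
        bool BelongsTo(const X* other) const { return other->CanCastTo(this); }
    protected:
        virtual bool CanCastTo(const X* obj) const {
            return dynamic_cast<const X*>(obj) != 0;
        }
    };

    // Hypothetical helper: injects an override whose cast target is Derived.
    template <typename Derived, typename Base>
    class HierarchyMember : public Base {
    protected:
        virtual bool CanCastTo(const X* obj) const {
            return dynamic_cast<const Derived*>(obj) != 0;
        }
    };

    class A : public HierarchyMember<A, X> { };
    class B : public HierarchyMember<B, A> { };
    class C : public HierarchyMember<C, A> { };

    int main() {
        X* a = new A();
        X* b = new B();
        X* c = new C();
        std::cout << b->BelongsTo(a) << " "        // 1 (B is in A's hierarchy)
                  << b->BelongsTo(c) << " "        // 0 (B is not in C's hierarchy)
                  << c->BelongsTo(a) << std::endl; // 1 (C is in A's hierarchy)
        delete a; delete b; delete c;
    }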

    Read the article

  • Is it rude to add "TODO: wtf?" in source code?

    - by mafutrct
    I encountered something ... well, you know TDWTF... something like that in an international project I'm working on. The code was written by a teammate. For a second I was tempted to add // TODO: wtf? to the offending code but restrained myself. The project is indeed on a professional level, but for internal conversation, more colloquial language is used - but still no "bad" words as in "wtf". Usually, I'd surely not add such a comment, but I believe there are a few factors that still make it worth considering: 1. It is not visible except as a comment in the source code (of course). 2. It is internal to our team - other developers may happen to see it but it is not their code. 3. Comments in source code are usually accepted to be more colloquial, since they are "kept between us developers". Would you advise never adding such a comment? Or do you regard it as an edge case? Did you possibly add something similar yourself?

    Read the article

  • ASP.NET MVC - ValidationSummary set from a different controller

    - by Rap
    I have a HomeController with an Index action that shows the Index.aspx view. It has a username/password login section. When the user clicks the submit button, it POSTs to a Login action in the AccountController. <% Html.BeginForm("Login", "Account", FormMethod.Post); %> In that action, it tests for Username/Password validity and if invalid, sends the user back to the login page with a message that the credentials were bad. [HttpPost] public ActionResult Login(LoginViewModel Model, string ReturnUrl) { User user = MembershipService.ValidateUser(Model.UserName, Model.Password); if (user != null) { //Detail removed here FormsService.SignIn(user.ToString(), Model.RememberMe); return Redirect(ReturnUrl); } else { ModelState.AddModelError("", "The user name or password provided is incorrect."); } // If we got this far, something failed, redisplay form return RedirectToAction("Index", "Home"); // <-- Here is the problem. ModelState is lost. } But here's the problem: the ValidationSummary is always blank because we're losing the ModelState when we RedirectToAction. How do I send the user to the action on a different controller without a Redirect?
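
    One common pattern for this situation, sketched under the assumption of the default in-process TempData provider: keep the redirect (post/redirect/get), but carry the ModelState across it in TempData and merge it back in the target action so the ValidationSummary has something to render. LoginViewModel is the view model from the question; the "CarriedModelState" key is just an illustrative name.

    using System.Web.Mvc;

    public class AccountController : Controller
    {
        [HttpPost]
        public ActionResult Login(LoginViewModel model, string returnUrl)
        {
            // ... validation as in the question ...
            ModelState.AddModelError("", "The user name or password provided is incorrect.");
            TempData["CarriedModelState"] = ModelState;    // survives exactly one redirect
            return RedirectToAction("Index", "Home");
        }
    }

    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            var carried = TempData["CarriedModelState"] as ModelStateDictionary;
            if (carried != null)
                ModelState.Merge(carried);                 // ValidationSummary now sees the errors
            return View();
        }
    }

    Many projects wrap the two halves in a pair of export/import ModelState action filters so the plumbing is not repeated in every action; the key name and the filter idea here are assumptions, not a built-in MVC feature.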

    Read the article

  • Finding the width of a directed acyclic graph... with only the ability to find parents

    - by Platinum Azure
    Hi guys, I'm trying to find the width of a directed acyclic graph... as represented by an arbitrarily ordered list of nodes, without even an adjacency list. The graph/list is for a parallel GNU Make-like workflow manager that uses files as its criteria for execution order. Each node has a list of source files and target files. We have a hash table in place so that, given a file name, the node which produces it can be determined. In this way, we can figure out a node's parents by examining the nodes which generate each of its source files using this table. That is the ONLY ability I have at this point, without changing the code severely. The code has been in public use for a while, and the last thing we want to do is to change the structure significantly and have a bad release. And no, we don't have time to test rigorously (I am in an academic environment). Ideally we're hoping we can do this without doing anything more dangerous than adding fields to the node. I'll be posting a community-wiki answer outlining my current approach and its flaws. If anyone wants to edit that, or use it as a starting point, feel free. If there's anything I can do to clarify things, I can answer questions or post code if needed. Thanks! EDIT: For anyone who cares, this will be in C. Yes, I know my pseudocode is in some horribly botched Python look-alike. I'm sort of hoping the language doesn't really matter.
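
    Purely as a hedged sketch (in C, since that is the stated target language): if each node can reach its parents through the file hash table, one cheap notion of "width" is the largest number of nodes that share the same level, where a node's level is one more than the deepest of its parents. Nodes on the same level are pairwise incomparable, so this is a valid antichain and therefore a lower bound on the true maximum width, which is often what a scheduler cares about. The struct fields below are assumptions about how the existing node could be extended.

    #include <stdlib.h>

    struct node {
        struct node **parents;    /* resolved once via the source-file hash table */
        int nparents;
        int level;                /* -1 until computed */
    };

    static int node_level(struct node *n)
    {
        int i, best = -1;
        if (n->level >= 0)
            return n->level;                      /* memoised */
        for (i = 0; i < n->nparents; i++) {
            int l = node_level(n->parents[i]);    /* recursion depth == longest path */
            if (l > best)
                best = l;
        }
        return n->level = best + 1;               /* roots end up at level 0 */
    }

    int dag_width(struct node **nodes, int nnodes)
    {
        int i, width = 0, maxlevel = 0, *count;

        for (i = 0; i < nnodes; i++)
            if (node_level(nodes[i]) > maxlevel)
                maxlevel = nodes[i]->level;

        count = calloc(maxlevel + 1, sizeof *count);
        if (count == NULL)
            return -1;
        for (i = 0; i < nnodes; i++)
            if (++count[nodes[i]->level] > width)
                width = count[nodes[i]->level];
        free(count);
        return width;
    }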

    Read the article

  • Whose fault is a NullReferenceException?

    - by stefan.at.wpf
    I'm currently working on a class which exposes an internal List through a property. The List shall and can be modified. The problem is, entries in the internal list could be set to null from outside the class. My code actually looks like this: class ClassWithList { List<object> _list = new List<object>(); // get accessor, which however returns the reference to the list, // therefore the list can be modified (this is intended) public List<object> Data { get { return _list; } } private void doSomeWorkWithTheList() { foreach(object obj in _list) // do some work with the objects in the list without checking for null. } } So in doSomeWorkWithTheList() I could either always check whether the current list entry is null, or I could just assume that the person using this class doesn't get the idea to set entries to null. So finally the questions end up in: whose fault is a NullReferenceException in this case? Is it the fault of the class developer for not checking everything for null (which would make code generally - not only in this class - more complex), or is it the fault of the user of this class, as setting a List entry to null doesn't really make sense? I'd tend to generally not check values for null except in some really special cases. Is this bad style, or is it the de facto standard in practice? I know there's probably no ultimate answer for this; I'm just missing the experience for such things and am therefore wondering what other developers think about such cases, and I want to hear what's done in reality about checking for null (or not).
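
    One way to sidestep the blame question, shown here as an assumption-laden sketch rather than a prescription: stop handing out the raw List<T> and expose a Collection<T> subclass that rejects null at the boundary, so a careless caller fails fast at the point of the mistake instead of inside doSomeWorkWithTheList().

    using System;
    using System.Collections.ObjectModel;

    class NonNullCollection<T> : Collection<T> where T : class
    {
        protected override void InsertItem(int index, T item)
        {
            if (item == null) throw new ArgumentNullException("item");
            base.InsertItem(index, item);
        }

        protected override void SetItem(int index, T item)
        {
            if (item == null) throw new ArgumentNullException("item");
            base.SetItem(index, item);
        }
    }

    class ClassWithList
    {
        private readonly NonNullCollection<object> _list = new NonNullCollection<object>();

        // Still fully modifiable from outside, but nulls can no longer get in.
        public Collection<object> Data { get { return _list; } }
    }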

    Read the article

  • javax.xml.ws.soap.SOAPFaultException: Could not send Message - at JaxWsClientProxy.invoke - caused by HTTP response code: 401 for URL

    - by Mikkis
    I moved working code from dev to test and encountered the following error(s) in test: javax.xml.ws.soap.SOAPFaultException: Could not send Message. at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:143) ...... at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:64) at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:236) at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:472) at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:302) at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:254) at org.apache.cxf.frontend.ClientProxy.invokeSync(ClientProxy.java:73) at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:123) at $Proxy739.copyIntoItems(Unknown Source) Caused by: java.io.IOException: Server returned HTTP response code: 401 for URL: http:///_vti_bin/Copy.asmx at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1436) at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:379) at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:2046) Environment specs: Java 1.6, Tomcat 6, Eclipse Helios, Maven 2, CXF 2.2.3. As background work, I tried to explore the usual categories for this error: a bad URL (ruled out, as I am using the same URL in dev and test, and the URL, user ID and password are all accessible from both machines) and a connection timeout (the error is not 404 and it doesn't say the connection timed out... it says a 401 response code for the URL). I also checked that all the jars, in the same versions, are included in the test environment. Can someone shed some light on how to understand and resolve this error? Please let me know if any more details should be included.
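
    A 401 on /_vti_bin/Copy.asmx usually means the test server rejected the HTTP credentials rather than anything being wrong in the client code, so comparing what dev and test accept (Basic vs NTLM, validity of the service account) is the first thing to check. As a hedged sketch, assuming the endpoint accepts credentials via plain HTTP authentication, they can be attached explicitly to the CXF HTTP conduit so any dev/test difference is easy to spot:

    import org.apache.cxf.configuration.security.AuthorizationPolicy;
    import org.apache.cxf.endpoint.Client;
    import org.apache.cxf.frontend.ClientProxy;
    import org.apache.cxf.transport.http.HTTPConduit;

    public final class ConduitAuth {

        // 'port' is the JAX-WS proxy on which copyIntoItems() is later invoked.
        public static void attachCredentials(Object port, String user, String password) {
            Client client = ClientProxy.getClient(port);
            HTTPConduit conduit = (HTTPConduit) client.getConduit();

            AuthorizationPolicy auth = new AuthorizationPolicy();
            auth.setUserName(user);        // placeholder, e.g. "DOMAIN\\user" for a SharePoint box
            auth.setPassword(password);
            conduit.setAuthorization(auth);
        }
    }

    If the test SharePoint only accepts NTLM, the plain authorization policy above may not be enough and an NTLM-capable HTTP layer would be needed; that is an assumption worth verifying against the server configuration.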

    Read the article

  • Delphi: How to avoid EIntOverflow underflow when subtracting?

    - by Ian Boyd
    Microsoft already says, in the documentation for GetTickCount, that you should never compare tick counts to check if an interval has passed. e.g.: Incorrect (pseudo-code): DWORD endTime = GetTickCount + 10000; //10 s from now ... if (GetTickCount > endTime) break; The above code is bad because it is susceptible to rollover of the tick counter. For example, assume that the clock is near the end of its range: endTime = 0xfffffe00 + 10000 = 0x00002510; //9,488 decimal Then you perform your check: if (GetTickCount > endTime) Which is satisfied immediately, since GetTickCount is larger than endTime: if (0xfffffe01 > 0x00002510) The solution Instead you should always subtract the two time intervals: DWORD startTime = GetTickCount; ... if (GetTickCount - startTime) > 10000 //if it's been 10 seconds break; Looking at the same math: if (GetTickCount - startTime) > 10000 if (0xfffffe01 - 0xfffffe00) > 10000 if (1 > 10000) Which is all well and good in C/C++, where the compiler behaves a certain way. But what about Delphi? When I perform the same math in Delphi, with overflow checking on ({$Q+}, {$OVERFLOWCHECKS ON}), the subtraction of the two tick counts generates an EIntOverflow exception when the TickCount rolls over: if (0x00000100 - 0xffffff00) > 10000 0x00000100 - 0xffffff00 = 0x00000200 What is the intended solution for this problem? Edit: I've tried to temporarily turn off OVERFLOWCHECKS: {$OVERFLOWCHECKS OFF} delta := GetTickCount - startTime; {$OVERFLOWCHECKS ON} But the subtraction still throws an EIntOverflow exception. Is there a better solution, involving casts and larger intermediate variable types?
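
    For what it's worth, here is a minimal sketch of one way out that does not depend on compiler directives at all: route the subtraction through a helper that widens to Int64, so the wrap-around case is handled with ordinary arithmetic and overflow checking can stay on. Whether this matches the "intended" Delphi solution is open; it is just one safe option.

    program TickDemo;
    {$APPTYPE CONSOLE}
    uses Windows;

    function TickDiff(StartTick, EndTick: Cardinal): Cardinal;
    begin
      if EndTick >= StartTick then
        Result := EndTick - StartTick
      else
        { EndTick wrapped past zero: widen to Int64, add 2^32, narrow back }
        Result := Cardinal(Int64(EndTick) + $100000000 - Int64(StartTick));
    end;

    var
      startTime: Cardinal;
    begin
      startTime := GetTickCount;
      Sleep(50);
      WriteLn('elapsed ms: ', TickDiff(startTime, GetTickCount));
    end.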

    Read the article

  • How to re-prompt after a trap return in bash?

    - by verbose
    I have a script that is supposed to trap SIGTERM and SIGTSTP. This is what I have in the main block: trap 'killHandling' TERM And in the function: killHandling () { echo received kill signal, ignoring return } ... and similar for SIGINT. The problem is one of user interface. The script prompts the user for some input, and if the SIGTERM or SIGINT occurs when the script is waiting for input, it's confusing. Here is the output in that case: Enter something: # SIGTERM received received kill signal, ignoring # shell waits at blank line for user input, user gets confused # user hits "return", which then gets read as blank input from the user # bad things happen because of the blank input I have definitely seen scripts which handle this more elegantly, like so: Enter something: # SIGTERM received received kill signal, ignoring Enter something: # re-prompts user for user input, user is not confused What is the mechanism used to accomplish the latter? Unfortunately I can't simply change my trap code to do the re-prompt as the script prompts the user for several things and what the prompt says is context-dependent. And there has to be a better way than writing context-dependent trap functions. I'd be very grateful for any pointers. Thanks!
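
    A sketch of one way to get the re-prompt behaviour, relying on bash's read returning a status above 128 when it is interrupted by a trapped signal (while EOF gives a small non-zero status): put the prompt and the read in a small loop and only fall through on a real line or a real error. Names are illustrative.

    #!/bin/bash

    trap 'echo; echo "received kill signal, ignoring"' TERM INT

    # Prompt until we actually get a line; re-prompt after a trapped signal.
    ask() {
        local prompt=$1 status
        while true; do
            read -r -p "$prompt" REPLY
            status=$?
            [ "$status" -eq 0 ] && return 0        # got input
            [ "$status" -gt 128 ] && continue      # interrupted by a trapped signal
            return "$status"                       # EOF or other error: give up
        done
    }

    ask "Enter something: " || exit 1
    echo "you entered: $REPLY"

    Because the prompt string is a parameter, the same helper works for context-dependent prompts without the trap handler needing to know what to re-print.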

    Read the article

  • Boost Property_Tree iterators, how to handle them?

    - by Andry
    Hello... I am sorry, but I asked a question about the same topic before; this problem concerns another aspect of the one described in that question (How to iterate a boost...). My problem is this, take a look at the following code: #include <iostream> #include <string> #include <boost/property_tree/ptree.hpp> #include <boost/property_tree/xml_parser.hpp> #include <boost/algorithm/string/trim.hpp> int main(int argc, char** argv) { using boost::property_tree::ptree; ptree pt; read_xml("try.xml", pt); ptree::const_iterator end = pt.end(); for (ptree::const_iterator it = pt.begin(); it != end; it++) std::cout << "Here " << it->? << std::endl; } Well, as I was told in the previous question, it is possible to use iterators on a property_tree in Boost, but I do not know what type the iterator yields, or what methods or properties I can use... I assume it must be another ptree or something representing another XML hierarchy to be browsed again (if I want), but the documentation about this is very bad... I do not know why, but in the Boost docs I cannot find anything good about this... just something about a macro to browse nodes, but that approach is one I would really like to avoid... Well, the question is this: once I get an iterator on a ptree, how can I access the node name, value and attributes (of a node in an XML file)? Thank you
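
    For reference, a small sketch of what the iterator yields (based on how boost::property_tree is documented to behave): dereferencing gives a std::pair<const std::string, ptree>, so ->first is the child's name and ->second is the child's own subtree, which can be walked again or queried for its data. XML attributes, if present, end up under a child named "<xmlattr>".

    #include <iostream>
    #include <string>
    #include <boost/property_tree/ptree.hpp>
    #include <boost/property_tree/xml_parser.hpp>

    int main()
    {
        using boost::property_tree::ptree;
        ptree pt;
        read_xml("try.xml", pt);

        for (ptree::const_iterator it = pt.begin(); it != pt.end(); ++it) {
            // it->first  : name of the child node
            // it->second : the child's own ptree (recurse into it if needed)
            std::cout << "node: " << it->first
                      << ", data: " << it->second.get_value<std::string>()
                      << std::endl;
        }
    }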

    Read the article

  • Why does the entity framework need an ICollection for lazy loading?

    - by Akk
    I want to write a rich domain class such as public class Product { public IEnumerable<Photo> Photos {get; private set;} public void AddPhoto(){...} public void RemovePhoto(){...} } But the Entity Framework (V4, code-first approach) requires an ICollection type for lazy loading! The above code no longer works as designed, since clients can bypass the AddPhoto / RemovePhoto methods and call Add directly on the ICollection. This is not good. public class Product { public ICollection<Photo> Photos {get; private set;} //Bad public void AddPhoto(){...} public void RemovePhoto(){...} } It's getting really frustrating trying to implement DDD with EF4. Why did they choose ICollection for lazy loading? How can I overcome this? Does NHibernate offer me a better DDD experience?
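
    A hedged sketch of one common compromise (not a full fix, since EF4 code-first still sees a public collection): keep a virtual ICollection<Photo> for the proxy to lazy-load, but give it a persistence-flavoured name and expose the intended API separately, so the "right" way to mutate the aggregate is at least obvious. The property names are illustrative.

    using System;
    using System.Collections.Generic;

    public class Photo { }

    public class Product
    {
        public Product()
        {
            PhotosStorage = new List<Photo>();
        }

        // EF4 maps and lazy-loads this one (a virtual ICollection<T> is what the proxy needs).
        // If your EF version requires a public setter for proxy creation, relax 'protected'.
        public virtual ICollection<Photo> PhotosStorage { get; protected set; }

        // Domain-facing, read-only view that callers are meant to use.
        public IEnumerable<Photo> Photos { get { return PhotosStorage; } }

        public void AddPhoto(Photo photo)
        {
            if (photo == null) throw new ArgumentNullException("photo");
            // domain rules go here
            PhotosStorage.Add(photo);
        }

        public void RemovePhoto(Photo photo)
        {
            PhotosStorage.Remove(photo);
        }
    }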

    Read the article

  • rand() with exclusion of an already randomly generated number?

    - by Stefan
    Hey, I have a function which fetches a user's associated users from a table. The function then uses the rand() function to choose 5 randomly selected user IDs from the array. However!... In the case where a user doesn't have many associated users but is above the minimum (if below 5 it just returns the array as it is), it gives bad results due to repeated random numbers... How can I overcome this, or exclude a previously selected random number from the next rand() call? Here is the section of code doing the work. Bear in mind this must be highly efficient as this script is used everywhere. $size = sizeof($users)-1; $nusers[0] = $users[rand(0,$size)]; $nusers[1] = $users[rand(0,$size)]; $nusers[2] = $users[rand(0,$size)]; $nusers[3] = $users[rand(0,$size)]; $nusers[4] = $users[rand(0,$size)]; return $nusers; Thanks in advance! Stefan
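
    A minimal sketch of one way to avoid repeats entirely (assuming plain PHP and a numerically indexed $users array): let array_rand() hand back the required number of distinct keys in a single call, and keep the existing "return everything when there are too few" behaviour. The function name is illustrative.

    <?php
    function pick_random_users(array $users, $count = 5)
    {
        if (count($users) <= $count) {
            return array_values($users);       // too few: return them all, as before
        }

        $keys = array_rand($users, $count);    // $count distinct keys, no repeats
        $picked = array();
        foreach ($keys as $key) {
            $picked[] = $users[$key];
        }
        return $picked;
    }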

    Read the article

  • Find numbers in an array that add up to a given sum

    - by valli-R
    I want to find the first set of integers in an array X whose sum equals a given number N. For example: X = {5, 13, 24, 9, 3, 3} N = 28 Solution = {13, 9, 3, 3} Here is what I have so far (WARNING: I know it uses globals and that is bad; that's not the point of the question): <?php function s($index = 0, $total = 0, $solution = '') { global $numbers; global $sum; echo $index; if($total == 28) { echo '<br/>'.$solution.' = '.$sum.'<br/>'; } elseif($index < count($numbers) && $total != 28) { s($index + 1, $total, $solution); s($index + 1, $total + $numbers[$index], $solution.' '.$numbers[$index]); } } $numbers = array(5, 13, 24, 9, 3, 3); $sum = 28; s(); ?> I don't get how I can stop the process when it finds the solution. I know I am not far from a good solution. Thanks in advance
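
    One way to make the recursion stop at the first hit, shown as a sketch that also drops the globals: return the chosen numbers (or null) instead of echoing, so every caller can bail out as soon as a non-null result bubbles up.

    <?php
    function find_subset(array $numbers, $target, $index = 0, array $chosen = array())
    {
        if ($target === 0) {
            return $chosen;                             // found: stop and bubble it up
        }
        if ($target < 0 || $index >= count($numbers)) {
            return null;                                // dead end
        }

        // first try skipping the current number, then try taking it
        $without = find_subset($numbers, $target, $index + 1, $chosen);
        if ($without !== null) {
            return $without;
        }
        $chosen[] = $numbers[$index];
        return find_subset($numbers, $target - $numbers[$index], $index + 1, $chosen);
    }

    print_r(find_subset(array(5, 13, 24, 9, 3, 3), 28));   // 13, 9, 3, 3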

    Read the article

  • Is there a definitive reference document for Ruby syntax?

    - by JSW
    I'm searching for a definitive document on Ruby syntax. I know about the definitive documents for the core API and standard library, but what about the syntax itself? For instance, such a document should cover: reserved words, string literals syntax, naming rules for variables/classes/modules, all the conditional statements and their permutations, and so forth. I know there are many books and tutorials, yes, but every one of them is essentially a tutorial, each one having a range of different depth and focus. They will all, by necessity of brevity and narrative flow, omit certain details of the language that the author deems insignificant. For instance, did you know that you can use a case statement without an initial case value, and it will then execute the first true when clause? Any given Ruby book or tutorial may or may not cover that particular lesser-known functionality of the case syntax. It's not discussed in the section in "Programming Ruby" about case statements. But that is just one small example. So far the best documentation I've found is the rubyspec project, which appears to be an attempt to write a complete test suite for the language. That's not bad, but it's a bit hard to use from a practical standpoint as a developer working on my own projects. Am I just missing something or is there really no definitive readable document defining the whole of Ruby syntax?

    Read the article

  • Special characters in JS: how to use the "/" character

    - by user1461222
    I have vBulletin 4.2.0 and I added a special button to its editor following this article: http://www.vbulletinguru.com/2012/add-a-new-toolbar-button-to-ckeditor-tutorial/ What I want to do is insert syntax-highlighter code with this button. When I use the code below it works fine; CKEDITOR.plugins.add( 'YourPluginName', { init: function( editor ) { editor.addCommand( 'SayHello', { exec : function( editor ) { editor.insertHtml( "Hello from my plugin" ); } }); editor.ui.addButton( 'YourPluginName', { label: 'My Button Tooltip', command: 'SayHello', icon: this.path + 'YourPluginImage.png' } ); } } ); So I changed this code to the following, because I want to add specific text like below; CKEDITOR.plugins.add( 'DKODU', { init: function( editor ) { editor.addCommand( 'SayHello', { exec : function( editor ) { editor.insertHtml( '[kod=delphi][/kod]' ); } }); editor.ui.addButton( 'DKODU', { label: 'My Button Tooltip', command: 'SayHello', icon: this.path + 'star.png' } ); } } ); After updating the code, when I press the button nothing happens. I checked with Google and this site but I couldn't figure it out. I think I made a mistake with some special characters, but I couldn't find what the problem is. If I made some mistakes when publishing this question, forgive me, and also forgive me for my bad English. Thanks.

    Read the article

  • How to add an extra plist property using CMake?

    - by Jesse Beder
    I'm trying to add the item <key>UIStatusBarHidden</key><true/> to my plist that's auto-generated by CMake. For certain keys, it appears there are pre-defined ways to add an item; for example: set(MACOSX_BUNDLE_ICON_FILE ${ICON}) But I can't find a way to add an arbitrary property. I tried using the MACOSX_BUNDLE_INFO_PLIST target property as follows: I'd like the resulting plist to be identical to the old one, except with the new property I want, so I just copied the auto-generated plist and set that as my template. But the plist uses some Xcode variables, which also look like ${foo}, and CMake grumbles about this: Syntax error in cmake code when parsing string <string>com.bedaire.${PRODUCT_NAME:identifier}</string> syntax error, unexpected cal_SYMBOL, expecting } (47) Policy CMP0010 is not set: Bad variable reference syntax is an error. Run "cmake --help-policy CMP0010" for policy details. Use the cmake_policy command to set the policy and suppress this warning. This warning is for project developers. Use -Wno-dev to suppress it. In any case, I'm not even sure that this is the right thing to do. I can't find a good example or any good documentation about this. Ideally, I'd just let CMake generate everything as before, and just add a single extra line. What can I do?
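
    A minimal sketch of the usual route (assuming a target named MyApp; the template file name is illustrative): point MACOSX_BUNDLE_INFO_PLIST at your own copy of the generated plist, add the extra key there verbatim, and replace the Xcode-style ${PRODUCT_NAME:identifier} reference with a value CMake itself supplies, so CMake never has to parse Xcode's variable syntax at all.

    # CMakeLists.txt
    set(MACOSX_BUNDLE_GUI_IDENTIFIER "com.bedaire.myapp")   # plain value instead of ${PRODUCT_NAME:identifier}
    set_target_properties(MyApp PROPERTIES
        MACOSX_BUNDLE_INFO_PLIST "${CMAKE_CURRENT_SOURCE_DIR}/Info.plist.in")

    In Info.plist.in itself, the pair <key>UIStatusBarHidden</key><true/> is added as a literal line, and the CFBundleIdentifier entry refers to ${MACOSX_BUNDLE_GUI_IDENTIFIER}, which CMake substitutes when it configures the template.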

    Read the article

  • JavaScript window object element properties

    - by Timothy
    A coworker showed me the following code and asked me why it worked. <span id="myspan">Do you like my hat?</span> <script type="text/javascript"> var spanElement = document.getElementById("myspan"); alert("Here I am! " + spanElement.innerHTML + "\n" + myspan.innerHTML); </script> I explained that a property is attached to the window object with the name of the element's id when the browser parses the document, which then contains a reference to the appropriate DOM node. It's sort of as if window.myspan = document.getElementById("myspan") is called behind the scenes as the page is being rendered. The ensuing discussion we had raised a few questions: The window object and most of the DOM are not part of the official JavaScript/ECMA standards, but is the above behavior documented in any other official literature, perhaps browser-related? The above works in a browser (at least the main contenders) because there is a window object, but fails in something like Rhino. Is writing code that relies on this considered bad practice because it makes too many assumptions about the execution environment? Are there any browsers in which the above would fail, or is this considered standard behavior across the board? Does anyone here know the answers to those questions and would be willing to enlighten me? I tried a quick internet search, but I admit I'm not sure how to even properly phrase the query. Pointers to references and documentation are welcome.

    Read the article

  • Over Optimistic Daily Productivity

    - by Dan Revell
    I'm a junior developer and have been working since I graduated last summer, so it's coming up to a year now. I have this issue that is starting to get to me. Every night I think back to what I did that day, feel bad that I didn't get as much done as I would have liked, and then tick off in my head all the things I'll get done the following day. Come the end of the following day, I haven't gotten through half of what I wanted to. Might this over-optimism I'm suffering from be just because I'm relatively new to the profession and not yet aware of how long things will actually take me? The work might be quick to think through in my head, but all sorts of time sinks involved can bleed away the hours. If not that, then perhaps it's the technology stack that I'm working on. SharePoint isn't the easiest thing to develop for, and it's certainly something I came into not knowing a whole lot about. If it's because I'm not yet skilled enough to predict how long things will take me, is this trait of over-optimistic predictions universal to the profession? I'd appreciate any input from those experienced with working with younger developers and those that might have suffered from this themselves. [EDIT] Perhaps I worded the question badly. I'm interested in just general day-to-day work rather than overall project completion estimation.

    Read the article

  • Need opinions on LaTeX and ever upgrading

    - by yCalleecharan
    Hi, I've been using LaTeX since 2005 with the TeX Live distribution, and I've been upgrading as each new TeX Live distribution comes out. In recent years I have noticed an increase in new packages, updated packages, and in one instance a new package bearing a different name replacing an old one by the same package author. A LaTeX document which relies heavily on packages and which was produced a few years back may start to get warnings and error messages when compiled with a present-day LaTeX. The primary reason I switched to LaTeX is its reliability and robustness in creating big documents easily, not to mention the adorable typographic quality. With LaTeX one doesn't have to worry about how to open a docx in an old program supporting only doc, for instance. Now, when there are so many continual changes to the packages in a LaTeX distribution, I tend to wonder when this madness will end. Not that enhanced and new features in packages are bad, but not all updated packages are backward compatible. Eventually one would like to be able to compile, in 10 years' time, a LaTeX file one is working on at present and not get any compilation warnings/error messages due to some unpredictable behavior of updated packages, or due to a package that has been dropped from a LaTeX distribution. If I understand correctly, CTAN does keep a database with all packages in their different versions. I would like to know how you LaTeX users handle this issue. Thanks a lot...

    Read the article

  • Is this the right way to write a ProtocolDecoder in MINA?

    - by phpscriptcoder
    public class CustomProtocolDecoder extends CumulativeProtocolDecoder{ byte currentCmd = -1; int currentSize = -1; boolean isFirst = false; @Override protected boolean doDecode(IoSession is, ByteBuffer bb, ProtocolDecoderOutput pdo) throws Exception { if(currentCmd == -1) { currentCmd = bb.get(); currentSize = Packet.getSize(currentCmd); isFirst = true; } while(bb.remaining() > 0) { if(!isFirst) { currentCmd = bb.get(); currentSize = Packet.getSize(currentCmd); } else isFirst = false; //System.err.println(currentCmd + " " + bb.remaining() + " " + currentSize); if(bb.remaining() >= currentSize - 1) { Packet p = PacketDecoder.decodePacket(bb, currentCmd); pdo.write(p); } else { bb.flip(); return false; } } if(bb.remaining() == 0) return true; else return false; } } Anyone see anything wrong with this code? When a lot of packets are received at once, even when only one client is connected, one of them might get cut off at the end (12 bytes instead of 15 bytes, for example) which is obviously bad.
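
    For comparison, here is a hedged sketch of the same decoder written so that a packet is only ever consumed whole: mark the buffer before reading the command byte and reset if the rest of the packet has not arrived yet, instead of carrying currentCmd/isFirst state between calls. It assumes Packet.getSize() returns the total packet size including the command byte, as the original code seems to, and that Packet and PacketDecoder are the existing application classes.

    import org.apache.mina.common.ByteBuffer;
    import org.apache.mina.common.IoSession;
    import org.apache.mina.filter.codec.CumulativeProtocolDecoder;
    import org.apache.mina.filter.codec.ProtocolDecoderOutput;

    public class CustomProtocolDecoder extends CumulativeProtocolDecoder {

        @Override
        protected boolean doDecode(IoSession session, ByteBuffer bb, ProtocolDecoderOutput out)
                throws Exception {
            while (bb.remaining() > 0) {
                bb.mark();                         // remember where this packet starts
                byte cmd = bb.get();
                int size = Packet.getSize(cmd);    // total size, command byte included

                if (bb.remaining() < size - 1) {
                    bb.reset();                    // incomplete packet: put the cmd byte back
                    return false;                  // ask MINA to accumulate more data
                }
                out.write(PacketDecoder.decodePacket(bb, cmd));
            }
            return true;                           // everything consumed
        }
    }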

    Read the article

  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl 5.8 with $Test::More::VERSION being '0.80') which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan - it's generally a bad idea not to catch when your unit test dies prematurely. What is the most idiomatic way of running a configurable number of tests, assuming no no_plan or done_testing() is used? Details: My unit tests usually take the form of: use Test::More; my @test_set = ( [ "Test #1", $param1, $param2, ... ] ,[ "Test #1", $param1, $param2, ... ] # ,... ); foreach my $test (@test_set) { run_test($test); } sub run_test { # $expected_tests += count_tests($test); ok(test1($test)) || diag("Test1 failed"); # ... } The standard approach of use Test::More tests => 23; or BEGIN {plan tests => 23} does not work since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows: use Test::More; BEGIN { our @test_set = (); # Same set of tests as above my $expected_tests = 0; foreach my $test (@test_set) { $expected_tests += count_tests($test); } plan tests => $expected_tests; } our @test_set; # Must do!!! Since first "our" was in BEGIN's scope :( foreach my $test (@test_set) { run_test($test); } # Same sub run_test {} # Same I feel this can be done more idiomatically but am not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration - in BEGIN{} and after it.
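
    One detail worth noting, sketched below: plan() in Test::More is an ordinary run-time call, so as long as it runs before the first assertion it does not need to live in a BEGIN block at all, which removes the duplicated our @test_set declaration. count_tests() and run_test() stand in for the existing helpers.

    use strict;
    use warnings;
    use Test::More;            # no plan given yet

    my @test_set = (
        [ 'Test #1', 'param1', 'param2' ],
        [ 'Test #2', 'param1', 'param2' ],
    );

    my $expected_tests = 0;
    $expected_tests += count_tests($_) for @test_set;

    # Run-time plan: legal as long as it happens before the first test runs.
    plan tests => $expected_tests;

    run_test($_) for @test_set;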

    Read the article

  • How to learn a C++ GUI library effectively?

    - by Chan
    Hello everyone, I have many options for a GUI library in my head from searching Stack Overflow, but these are the ones I chose among others: Qt, gtkmm, GTK+. I used GTK+ a couple of years ago, and it was painful using a C API without string objects and containers. I prefer the C++ style, so I then switched to gtkmm, but the documentation was bad at that time. I found no help when encountering an issue. Now I want to give Qt4 a serious try, but I really want to know how to learn a GUI library effectively. With core C++, I usually pick up a problem and try to solve it in different ways using that particular technique or piece of functionality. On the other hand, after skimming through the documentation on the Qt site, I don't think this way of studying is applicable, since the GUI classes and APIs are so much bigger. Plus I'm still in school, so I won't have much time to play with it all day long. How did you learn GUI programming? Can anyone share some experience of how they learned? That would be invaluable input for me! Best regards, Chan Nguyen

    Read the article

  • WordPress: Using a Where Clause With A Custom Field

    - by Steve Wilkison
    I have a bunch of events that are listed on a particular page. Each event is a post. I need them to display in the order in which they occur, NOT the order of the posting date. So, I've created a custom field called TheDate and enter in the date in this format for each one: 20110306. Then, I wrote my query like this: query_posts( array ( 'cat' => '4', 'posts_per_page' => -1, 'orderby' => 'meta_value_num', 'meta_key' => 'TheDate', 'order' => 'ASC' ) ); Works perfectly and displays the events in the correct order. However, I also want it to ONLY display dates from today onward. I don't want it to display dates which have passed. It seems the way to do this is with a "filter." I tried this, but it doesn't work. $todaysdate = date('Ymd'); query_posts( array ( 'cat' => '4', 'posts_per_page' => -1, 'orderby' => 'meta_value_num', 'meta_key' => 'TheDate', 'order' => 'ASC' ) ); function filter_where( $where = '' ) { $where .= "meta_value_num >= $todaysdate"; return $where; } add_filter( 'posts_where', 'filter_where' ); I figure it's just a matter of where I'm using this filter, I probably have it in the wrong place. Or maybe the filter itself is bad. Any help or guidance would be greatly appreciated. Thanks!
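
    For what it's worth, a sketch of a filter-free alternative (assuming the installed WordPress supports the built-in meta comparison arguments): since TheDate is stored as YYYYMMDD, a string comparison against today's date already orders correctly, so the same query_posts() call can carry the "from today onward" condition.

    <?php
    query_posts( array(
        'cat'            => '4',
        'posts_per_page' => -1,
        'orderby'        => 'meta_value_num',
        'meta_key'       => 'TheDate',
        'order'          => 'ASC',
        // only keep events whose TheDate is today or later
        'meta_compare'   => '>=',
        'meta_value'     => date( 'Ymd' ),
    ) );

    If the posts_where route is kept instead, note that $todaysdate is not visible inside filter_where() as written (it would need to be passed in or made global) and the appended clause needs a leading ' AND '; the meta arguments above avoid both issues.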

    Read the article

  • Single website multiple connection strings using asp mvc 2 and nhibernate

    - by jjjjj
    Hi, in my website I use ASP.NET MVC 2 + Fluent NHibernate as the ORM, and StructureMap as the IoC container. There are several databases with identical metadata (and so the entities and mappings are the same). On the LogOn page the user fills in a login, password and remember-me flag, and chooses his server from a dropdown list (in fact he chooses a database). Web.config contains all connection strings and we can assume that they won't be changed at run time. I suppose that it is required to have one session factory per database. Before using multiple databases, I loaded classes into my StructureMap ObjectFactory in Application_Start: ObjectFactory.Initialize(init => init.AddRegistry<ObjectRegistry>()); ObjectFactory.Configure(conf => conf.AddRegistry<NhibernateRegistry>()); NhibernateRegistry class: public class NhibernateRegistry : Registry { public NhibernateRegistry() { var sessionFactory = NhibernateConfiguration.Configuration.BuildSessionFactory(); For<Configuration>().Singleton().Use( NhibernateConfiguration.Configuration); For<ISessionFactory>().Singleton().Use(sessionFactory); For<ISession>().HybridHttpOrThreadLocalScoped().Use( ctx => ctx.GetInstance<ISessionFactory>().GetCurrentSession()); } } In Application_BeginRequest I bind an opened NHibernate session to the ASP.NET session (NHibernate session per request) and in EndRequest I unbind them: protected void Application_BeginRequest( object sender, EventArgs e) { CurrentSessionContext.Bind(ObjectFactory.GetInstance<ISessionFactory>().OpenSession()); } Q1: How can I work out which SessionFactory to use according to the authenticated user? Is it something like UserData filled with the database name (I use simple FormsAuthentication)? For logging I use log4net, namely AdoNetAppender, which contains a connectionString (in XML, of course). Q2: How can I manage multiple connection strings for this database appender, so logs would be written to the current database? I have no idea how to do that except changing the XML all the time and resetting the XML configuration, but that's a really bad solution.
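
    For Q1, a hedged sketch of one approach (names and helpers are illustrative, and it assumes the database key chosen at LogOn is written into the FormsAuthentication ticket's UserData): register one session factory per connection string as a named instance, then resolve by that name on each request.

    using System.Configuration;
    using System.Web;
    using System.Web.Security;
    using NHibernate;
    using StructureMap;

    public static class SessionFactories
    {
        // Called once from Application_Start: one named ISessionFactory per connection string
        // (filter out inherited entries such as LocalSqlServer if necessary).
        public static void RegisterAll()
        {
            foreach (ConnectionStringSettings cs in ConfigurationManager.ConnectionStrings)
            {
                ISessionFactory factory = BuildFactoryFor(cs.ConnectionString); // hypothetical helper
                string name = cs.Name;
                ObjectFactory.Configure(x =>
                    x.For<ISessionFactory>().Singleton().Use(factory).Named(name));
            }
        }

        // Resolves the factory for the database the current user picked at LogOn,
        // assuming that key was stored in the ticket's UserData when signing in.
        public static ISessionFactory ForCurrentUser()
        {
            var identity = HttpContext.Current.User.Identity as FormsIdentity;
            string dbKey = identity != null ? identity.Ticket.UserData : null;
            return ObjectFactory.GetNamedInstance<ISessionFactory>(dbKey);
        }

        private static ISessionFactory BuildFactoryFor(string connectionString)
        {
            // Fluent NHibernate configuration for one database would go here.
            throw new System.NotImplementedException();
        }
    }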

    Read the article

  • Code Contracts: Do we have to specify Contract.Requires(...) statements redundantly in delegating methods?

    - by herzmeister der welten
    I'm intending to use the new .NET 4 Code Contracts feature for future development. This made me wonder if we have to specify equivalent Contract.Requires(...) statements redundantly in a chain of methods. I think a code example is worth a thousand words: public bool CrushGodzilla(string weapon, int velocity) { Contract.Requires(weapon != null); // long code return false; } public bool CrushGodzilla(string weapon) { Contract.Requires(weapon != null); // specify contract requirement here // as well??? return this.CrushGodzilla(weapon, int.MaxValue); } For runtime checking it doesn't matter much, as we will eventually always hit the requirement check, and we will get an error if it fails. However, is it considered bad practice when we don't specify the contract requirement here in the second overload again? Also, there will be the feature of compile time checking, and possibly also design time checking of code contracts. It seems it's not yet available for C# in Visual Studio 2010, but I think there are some languages like Spec# that already do. These engines will probably give us hints when we write code to call such a method and our argument currently can or will be null. So I wonder if these engines will always analyze a call stack until they find a method with a contract that is currently not satisfied? Furthermore, here I learned about the difference between Contract.Requires(...) and Contract.Assume(...). I suppose that difference is also to consider in the context of this question then?

    Read the article

  • How to avoid hard coding credentials into Sharepoint webpart?

    - by SeeBees
    I am building a SharePoint web part that will be used by all users. The web part connects to a web service which needs credentials with higher privileges than common users have. I hard-coded the credentials in the web part's code: query.Credentials = new System.Net.NetworkCredential("username", "password", "domain"); (query is an instance of the web service class.) This may not be a good approach. With regard to security, the source code of the web part is available to people who are not allowed to see the credentials. This is bad enough, but is there any other drawback to this approach? A web part doesn't have an associated .config file. The .config file is at the application level of the SharePoint site, and I don't want to modify it for a single web part. I wonder if there is a web-part-specific way to solve this problem? Say, provide a WebBrowsable property so that an admin can set the credentials. Is this possible? Thanks
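
    Yes, web part properties can be surfaced in the tool pane; a minimal sketch (ASP.NET-style web part, property names illustrative) is below. Note that values stored this way still sit unencrypted in the content database, so this addresses "no credentials in source code" rather than secure storage in general.

    using System.Web.UI.WebControls.WebParts;

    public class ServiceCallerPart : WebPart
    {
        [WebBrowsable(true)]
        [Personalizable(PersonalizationScope.Shared)]   // one shared value for all users
        [WebDisplayName("Service user name")]
        public string ServiceUserName { get; set; }

        [WebBrowsable(true)]
        [Personalizable(PersonalizationScope.Shared)]
        [WebDisplayName("Service password")]
        public string ServicePassword { get; set; }

        [WebBrowsable(true)]
        [Personalizable(PersonalizationScope.Shared)]
        [WebDisplayName("Service domain")]
        public string ServiceDomain { get; set; }

        protected override void CreateChildControls()
        {
            // query.Credentials = new System.Net.NetworkCredential(
            //     ServiceUserName, ServicePassword, ServiceDomain);
            base.CreateChildControls();
        }
    }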

    Read the article
