Search Results

Search found 3651 results on 147 pages for 'direct booking'.


  • Twitter OAuth: error when trying to POST a direct message

    - by Darxval
    So I am building a JavaScript helper that is used in conjunction with my C++ application for sending direct messages to users. The script does the work of building the request that I send. When I send a request I receive "Incorrect signature" or "can not authenticate you". Does anyone see something I am missing or doing wrong? I am continuing to investigate. Thank you in advance. The JavaScript:

        var nDate = new Date();
        var epoch = nDate.getTime();
        var nounce = "";
        nounce = Base64.encode(epoch+randomString());
        var Parameters = [
            "oauth_consumerkey="+sConsumerKey,
            "oauth_nonce="+nounce,
            "oauth_signature_method=HMAC-SHA1",
            "oauth_timestamp="+epoch,
            "oauth_token="+sAccessToken,
            "oauth_version=1.0",
            "text="+sText,
            "user="+sUser];
        var SortedParameters = Parameters.sort();
        var joinParameters = SortedParameters.join("&");
        var encodeParameters = escape(joinParameters);
        signature_base_string = escape("POST&"+NormalizedURL+"&"+encodeParameters);
        signature_key = sConsumerSecret+"&"+sAccessSecret;
        signature = Base64.encode(hmacsha1(signature_base_string,signature_key));
        sAuthHeader = " OAuth realm=, oauth_nonce="+nounce+", oauth_timestamp="+epoch+", oauth_consumer_key="+sConsumerKey+", oauth_signature_method=HMAC-SHA1, oauth_version=1.0, oauth_signature="+signature+", oauth_token="+sAccessToken+", text="+sText;
        goNVOut.Set("Header.Authorization: ", sAuthHeader);
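    For reference, a minimal sketch (written here in Java with made-up names and a placeholder URL, not the asker's script) of how an OAuth 1.0a HMAC-SHA1 signature base string is normally assembled. Note the standard parameter name oauth_consumer_key, a timestamp in seconds rather than milliseconds, and strict RFC 3986 percent-encoding instead of escape():

        import java.net.URLEncoder;
        import java.nio.charset.StandardCharsets;
        import java.util.Base64;
        import java.util.Map;
        import java.util.TreeMap;
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        public class OAuthSignatureSketch {
            // RFC 3986 percent-encoding; URLEncoder is close, but a few characters differ.
            static String pctEncode(String s) throws Exception {
                return URLEncoder.encode(s, StandardCharsets.UTF_8.name())
                        .replace("+", "%20").replace("*", "%2A").replace("%7E", "~");
            }

            static String sign(String postUrl, TreeMap<String, String> params,
                               String consumerSecret, String tokenSecret) throws Exception {
                StringBuilder normalized = new StringBuilder();
                for (Map.Entry<String, String> e : params.entrySet()) {   // TreeMap keeps keys sorted
                    if (normalized.length() > 0) normalized.append('&');
                    normalized.append(pctEncode(e.getKey())).append('=').append(pctEncode(e.getValue()));
                }
                String baseString = "POST&" + pctEncode(postUrl) + "&" + pctEncode(normalized.toString());
                String signingKey = pctEncode(consumerSecret) + "&" + pctEncode(tokenSecret);
                Mac mac = Mac.getInstance("HmacSHA1");
                mac.init(new SecretKeySpec(signingKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
                return Base64.getEncoder().encodeToString(mac.doFinal(baseString.getBytes(StandardCharsets.UTF_8)));
            }

            public static void main(String[] args) throws Exception {
                TreeMap<String, String> p = new TreeMap<>();
                p.put("oauth_consumer_key", "...");                        // not oauth_consumerkey
                p.put("oauth_nonce", "...");
                p.put("oauth_signature_method", "HMAC-SHA1");
                p.put("oauth_timestamp", Long.toString(System.currentTimeMillis() / 1000L)); // seconds, not ms
                p.put("oauth_token", "...");
                p.put("oauth_version", "1.0");
                p.put("text", "hello");
                p.put("user", "someuser");
                // Hypothetical endpoint; use the normalized request URL from your own request.
                System.out.println(sign("https://api.example.com/direct_messages/new", p, "...", "..."));
            }
        }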

    Read the article

  • Why do pure virtual base classes get direct access to static data members while derived instances do not?

    - by Shamster
    I've created a simple pair of classes. One is pure virtual with a static data member, and the other is derived from the base, as follows:

        #include <iostream>

        template <class T>
        class Base {
        public:
            Base (const T _member) { member = _member; }
            static T member;
            virtual void Print () const = 0;
        };

        template <class T> T Base<T>::member;

        template <class T>
        void Base<T>::Print () const {
            std::cout << "Base: " << member << std::endl;
        }

        template <class T>
        class Derived : public Base<T> {
        public:
            Derived (const T _member) : Base<T>(_member) { }
            virtual void Print () const {
                std::cout << "Derived: " << this->member << std::endl;
            }
        };

    I've found from this relationship that when I need access to the static data member in the base class, I can refer to it directly, as if it were a regular, non-static class member; i.e. the Base::Print() method does not require a this-> qualifier. However, the derived class does require the this->member indirect access syntax. I don't understand why this is. Both class methods are accessing the same static data, so why does the derived class need further specification? A simple call to test it is:

        int main () {
            Derived<double> dd (7.0);
            dd.Print();
            return 0;
        }

    which prints the expected "Derived: 7"

    Read the article

  • Is there a way to find out whether a class is a direct base of another class?

    - by user176168
    Hi, I'm wondering whether there is a way to find out whether a class is a direct base of another class, i.e. in Boost type-trait terms an is_direct_base_of function. As far as I can see Boost doesn't seem to support this kind of functionality, which leads me to think that it's impossible with the current C++ standard. The reason I want it is to do some validation checking on two macros that are used for a reflection system to specify that one class is derived from another, e.g.

        header.h:

        #define BASE A
        #define DERIVED B
        class A {};
        class B : public A {
            #include <rtti.h>
        };

        rtti.h:

        // I want to check that the two macros are correct with a compile time assert
        Rtti<BASE, DERIVED> m_rtti;

    Although the macros seem unnecessary in this simple example, in my real-world scenario rtti.h is a lot more complex. One possible avenue would be to compare the size of the this pointer with the size of a this pointer cast to the base type, and somehow try to figure out whether it's the size of the base class itself away or something (yeah, you're right, I don't know how that would work either! lol)

    Read the article

  • How do I introduce row names to a function in R

    - by Tahnoon Pasha
    Hi, I have a utility function I've put together to insert rows into a dataframe (below). If I was writing out the formula by hand I would put something like

        newframe = rbind(oldframe[1:rownum,], row_to_insert=row_to_insert, oldframe[(rownum+1):nrow(oldframe),])

    to name row_to_insert. Could someone tell me how to do this in a function? Thanks

        insertrows = function (x, y, rownum) {
            newframe = rbind(y[1:rownum, ], x, y[(rownum + 1):nrow(y), ])
            return(data.frame(newframe))
        }

    MWE of some underlying data added below:

        financials = data.frame(sales=c(100,150,200,250),
                                some.direct.costs=c(25,30,35,40),
                                other.direct.costs=c(15,25,25,35),
                                indirect.costs=c(40,45,45,50))
        oldframe = t(financials)
        colnames(oldframe) = make.names(seq(2000,2003,1))
        total.direct.costs = oldframe['some.direct.costs',] + oldframe['other.direct.costs',]
        newframe = total.direct.costs
        n = rownum = 3
        oldframe = insertrows(total.direct.costs=newframe, oldframe, n)

    Read the article

  • Do backlinks to blocked content add value?

    - by David Fisher
    We've been debating the following SEO question at our office: if you block bot access to a page, either via robots.txt or on-page noindex metadata, does that negate the value of any backlinks to that page? We have a client who wants to block some event booking form pages from being indexed, as each booking form page has a unique URL parameter and the pages are "clogging up" the Google index; however, lots of websites link to those booking form pages and we wouldn't want to lose the value of those links. Any opinions welcomed.

    Read the article

  • Date Compare Validator Control ASP.NET

    - by Sahanr
    Compare two input dates to avoid invalid dates. In this example I have created two textboxes and named them "TextBoxSeminarDate" and "TextBoxBookingDeadline". The booking deadline date must be on or before the seminar date, therefore I used the Operator "LessThanEqual". I have validated the "TextBoxBookingDeadline" value by comparing it with the "TextBoxSeminarDate" value as follows:

        <asp:CompareValidator ID="CompareValidatorBookingDeadline" runat="server"
            ControlToCompare="TextBoxSeminarDate"
            ControlToValidate="TextBoxBookingDeadline"
            Display="Dynamic"
            ErrorMessage="Please check the seminar date and select appropriate date for booking deadline"
            Operator="LessThanEqual"
            Type="Date"
            ValueToCompare="<%= TextBoxSeminarDate.Text.ToShortString() %>">*</asp:CompareValidator>

    The important thing is the "ValueToCompare" property of the compare validator. Here I have assigned it the value of TextBoxSeminarDate and then compared it with the booking deadline date.

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
    The link-editor (ld) in Solaris 11 has a new feature that we call guidance that is intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise. We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works. The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero-tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the ARC case PSARC 2010/312, Link-editor guidance.

    The History Of Unfortunate Link-Editor Defaults

    The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system. Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, ELF (Executable and Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: for backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort.
    We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines:

    Change ld Defaults

    Since the world would be a better place if the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting.

    New Link-Editor

    One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of existing code in the world built around the existing ld command, reaching back to the 1970s. ld use is embedded in large and unknown numbers of makefiles, and is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues.

    An Option To Make ld Do The Right Things Automatically

    This line of reasoning is best summarized by a CR filed in 2005, entitled "6239804 make it easier for ld(1) to do what's best". The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction: The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best. The intent is that when a user opts into -z best, they understand that -z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change.
    When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed. The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best, that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it. I drew two conclusions from the above history: For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces. For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together. The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls: -z guidance gives advice, but the decision to take that advice remains with the user, who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience. It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject. The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system. Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: the user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards.

    ld Command Line Options

    The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature: In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement.
    Guidance suggestions always offer specific advice, and not vague generalizations. You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed. As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that lets you turn it off. In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows:

        % ld -Dargs -z guidance=foo,nodefs
        debug:
        debug: Solaris Linkers: 5.11-1.2275
        debug:
        debug: arg[1] option=-D: option-argument: args
        debug: arg[2] option=-z: option-argument: guidance=foo,nodefs
        debug: warning: unrecognized -z guidance item: foo

    The -z fatal-warnings option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms:

        -z fatal-warnings | nofatal-warnings
        --fatal-warnings | --no-fatal-warnings

    The -z fatal-warnings and the --fatal-warnings options cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and the --no-fatal-warnings options cause the link-editor to treat warnings as non-fatal. This is the default behavior. The -z guidance option is defined as follows:

        -z guidance[=item1,item2,...]

    Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices. It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens, representing the class of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris. The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows.

    Specify Required Dependencies

    Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs.

    Do Not Specify Non-Required Dependencies

    Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused.

    Lazy Loading

    Dependencies should be identified for lazy loading.
    Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload.

    Direct Bindings

    Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct, or -z direct options should any dependency be processed before either of these options, or the -z nodirect option, is encountered. This guidance can be disabled with -z guidance=nodirect.

    Pure Text Segment

    Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn nor -z textoff options are encountered. This guidance can be disabled with -z guidance=notext.

    Mapfile Syntax

    All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile.

    Library Search Path

    Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath. In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects.

    Example

    The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings:

        - Does not specify all of its dependencies
        - Specifies dependencies it does not use
        - Does not use direct bindings
        - Uses a version 1 mapfile
        - Contains relocations to the readonly allocable text (not PIC)

    This scenario is sadly very common — many shared objects have one or more of these issues.

        % cat hello.c
        #include <stdio.h>
        #include <unistd.h>
        void hello(void) { printf("hello user %d\n", getpid()); }
        % cat mapfile.v1
        # This version 1 mapfile will trigger a guidance message
        % cc hello.c -o hello.so -G -M mapfile.v1 -lelf

    As you can see, the operation completes without error, resulting in a usable object.
    However, turning on guidance reveals a number of things that could be better:

        % cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance
        ld: guidance: version 2 mapfile syntax recommended: mapfile.v1
        ld: guidance: -z lazyload option recommended before first dependency
        ld: guidance: -B direct or -z direct option recommended before first dependency
        Undefined                       first referenced
         symbol                             in file
        getpid                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
        printf                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
        ld: warning: symbol referencing errors
        ld: guidance: -z defs option recommended for shared objects
        ld: guidance: removal of unused dependency recommended: libelf.so.1
        warning: Text relocation remains                referenced
            against symbol                  offset      in file
        .rodata1 (section)                  0xa         hello.o
        getpid                              0x4         hello.o
        printf                              0xf         hello.o
        ld: guidance: position independent (PIC) code recommended for shared objects
        ld: guidance: see ld(1) -z guidance for more information

    Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things:

        % cat mapfile.v2
        # This version 2 mapfile will not trigger a guidance message
        $mapfile_version 2
        % cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance

    There are situations in which the guidance does not fit the object being built. For instance, suppose you want to build an object without direct bindings:

        % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance
        ld: guidance: -B direct or -z direct option recommended before first dependency
        ld: guidance: see ld(1) -z guidance for more information

    It is easy to disable that specific guidance warning without losing the overall benefit of allowing the remainder of the guidance feature to operate:

        % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect

    Conclusions

    The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact. Deciding to use guidance is likely to cause some up-front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, for keeping them that way. The investment is often worth it, and will repay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.

    Read the article

  • Issues printing through ssh tunnel and port forwarding

    - by simogasp
    I'm having some problems trying to print through an ssh tunnel. I'd like to print from my laptop to a network printer (a Toshiba es453, for what it's worth) which is in a local network. I can reach the local network using a gateway. So far I did the following:

        ssh -N -L19100:<Printer_IP>:9100 <username>@<ssh_gateway>

    Basically I just mapped port 19100 of my laptop directly to the input port of the printer, passing through the gateway. So far, so good. Then I tried to install a new printer on my laptop with the GUI config tool of Ubuntu, so that the new printer is on localhost at port 19100 (as AppSocket/HP JetDirect), and then I provided the proper driver for the printer. In theory, once the tunnel is open I should be able to print from any program just by selecting this printer. Of course, it does not work. :-) The document hangs in the queue with status Processing, while in the shell where I set up the tunnel I get these errors about failing to open channels:

        debug1: Local forwarding listening on ::1 port 19100.
        debug1: channel 0: new [port listener]
        debug1: Local forwarding listening on 127.0.0.1 port 19100.
        debug1: channel 1: new [port listener]
        debug1: Requesting no-more-sessions@openssh.com
        debug1: Entering interactive session.
        debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
        debug1: channel 2: new [direct-tcpip]
        debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
        debug1: channel 3: new [direct-tcpip]
        channel 2: open failed: connect failed: Connection timed out
        debug1: channel 2: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44434, nchannels 4
        debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
        debug1: channel 2: new [direct-tcpip]
        channel 3: open failed: connect failed: Connection timed out
        debug1: channel 3: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44443, nchannels 4
        channel 2: open failed: connect failed: Connection timed out
        debug1: channel 2: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44493, nchannels 3
        debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
        debug1: channel 2: new [direct-tcpip]

    As a further debugging test I tried the following: from a machine inside the local network I did a telnet <IP_printer> 9100, got access, wrote some random thing, closed the connection, and correctly got a printout of what I had written. So the port and the IP of the printer should be correct. I tried the same from my laptop with the tunnel open; the telnet succeeded but, again, the printer didn't print anything, and I got the usual channel x: open failed: errors. I'm not a great expert on the matter; I just thought that in theory it was possible to do something like this, but maybe there is something I didn't consider or did wrong. Any clue? Thanks! Simone

    [update] As a further debugging test, I tried to replicate the procedure from a machine in the local network. From that machine I did

        ssh -N -L19100:<IP_printer>:9100 <username>@<ssh_gateway>

    (note that now the machine, the gateway and the printer are all in the same local network). Then I tried the telnet test again with telnet localhost 19100; I got access and everything, but I didn't get the print, just the usual error:

        channel 2: open failed: connect failed: Connection timed out

    Maybe I am missing some other connection that needs to be forwarded, or maybe this is not allowed by the administrators. Of course, if I connect via ssh tunneling to the local machine from my laptop through the gateway, I can successfully print using the lpr command (from the local machine). But this is what I would like to avoid (yes, I'm lazy... :-)); I would like a more 'elegant' and transparent way to do this.

    Read the article

  • MySQL - Finding time overlaps

    - by Jude
    Hi, I have 2 tables in the database with the following attributes:

        Booking
        =======
        booking_id
        booking_start
        booking_end

        resource_booked
        ===============
        booking_id
        resource_id

    The second table is an associative entity between "Booking" and "Resource" (i.e., 1 booking can contain many resources). The attributes booking_start and booking_end are timestamps with date and time in them. May I know how I might be able to find out, for each resource_id (resource_booked), whether the date/time overlaps or clashes with other bookings of the same resource_id? I was doodling the answer on paper, pictorially, to see if it might help me visualize how I could solve this, and I got this:

        1. Join the 2 tables (Booking, Booked_resource) into one table with the 4 attributes needed.
        2. Follow the answer suggested here: http://stackoverflow.com/questions/689458/find-overlapping-date-time-rows-within-one-table

    I did step 1 but step 2 is leaving me baffled! I would really appreciate any help on this! Thanks!
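    Not a full answer, but a sketch of the usual approach, written here as a small Java snippet so the condition is easy to test; the SQL string simply reuses the table and column names from the question. Two bookings of the same resource clash when each one starts before the other ends.

        import java.sql.Timestamp;

        public class BookingOverlapSketch {
            // Two intervals overlap when each one starts before the other ends.
            static boolean overlaps(Timestamp aStart, Timestamp aEnd,
                                    Timestamp bStart, Timestamp bEnd) {
                return aStart.before(bEnd) && bStart.before(aEnd);
            }

            // The same condition as a self-join over the two tables from the question.
            // rb2.booking_id > rb1.booking_id avoids comparing a booking against itself
            // and avoids listing each clashing pair twice.
            static final String OVERLAP_SQL =
                "SELECT rb1.resource_id, b1.booking_id AS booking_a, b2.booking_id AS booking_b " +
                "FROM resource_booked rb1 " +
                "JOIN booking b1 ON b1.booking_id = rb1.booking_id " +
                "JOIN resource_booked rb2 ON rb2.resource_id = rb1.resource_id " +
                "                        AND rb2.booking_id > rb1.booking_id " +
                "JOIN booking b2 ON b2.booking_id = rb2.booking_id " +
                "WHERE b1.booking_start < b2.booking_end " +
                "  AND b2.booking_start < b1.booking_end";
        }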

    Read the article

  • How can I direct an edge to leave the diamond on the right?

    - by Didier Trosset
    I have a simple dot diagram to show how to perform tests:

        PerformTests;
        PerformTests -> TestsPassed;
        TestsPassed [shape="diamond"];
        TestsPassed -> Release [label="Yes"];
        TestsPassed -> FixErrors [label="No"];
        FixErrors -> PerformTests;

    [The question included an ASCII rendering of the resulting graph: PerformTests at the top, an edge down to the TestsPassed diamond, a "Y" edge down to Release, an "N" edge to FixErrors, and an edge from FixErrors looping back up to PerformTests.]

    The diagram shows square boxes for all nodes, except TestsPassed, which has a diamond shape. My issue is here: I'd like the edge that goes out of the diamond for "No" to leave the diamond at the right (east) instead of obliquely down-right (south-east). [Two small ASCII sketches followed, "What I have" versus "What I want": in the first the No edge leaves the diamond toward the lower right; in the second it leaves horizontally from the right point of the diamond.] I've seen such a compass_pt in the dot grammar, but cannot figure out how to use it. Is what I want possible, and how do I do it?

    Read the article

  • Local variable assignment versus direct assignment; properties and memory

    - by Typeoneerror
    In Objective-C I see a lot of sample code where the author assigns a local variable, assigns it to a property, then releases the local variable. Is there a practical reason for doing this? I've been just assigning directly to the property for the most part. Would that cause a memory leak in any way? I guess I'd like to know if there's any difference between this:

        HomeScreenBtns *localHomeScreenBtns = [[HomeScreenBtns alloc] init];
        self.homeScreenBtns = localHomeScreenBtns;
        [localHomeScreenBtns release];

    and this:

        self.homeScreenBtns = [[HomeScreenBtns alloc] init];

    assuming that homeScreenBtns is a property like so:

        @property (nonatomic, retain) HomeScreenBtns *homeScreenBtns;

    I'm getting ready to submit my application to the App Store, so I'm in full optimize/QA mode.

    Read the article

  • Flash content works on direct link, but not when inserted into HTML?

    - by Zolomon
    How come http://www.zolomon.com/wptj/wp-content/themes/default/polaroid.swf works perfectly, but not when embedded at http://www.zolomon.com/wptj/?page_id=8 ? The code I use to insert the .swf file is the following:

        <object width="522" height="490" id="polaroid" align="middle">
            <param name="allowScriptAccess" value="sameDomain" />
            <param name="allowFullScreen" value="false" />
            <param name="movie" value="polaroid.swf" />
            <param name="menu" value="false" />
            <param name="quality" value="high" />
            <param name="scale" value="noscale" />
            <param name="bgcolor" value="#DFCEAF" />
            <embed src="wp-content/themes/default/polaroid.swf" menu="false" quality="high" scale="noscale"
                wmode="transparent" bgcolor="#ffffff" width="522" height="490" name="polaroid" align="middle"
                allowScriptAccess="sameDomain" allowFullScreen="false" type="application/x-shockwave-flash"
                pluginspage="http://www.adobe.com/go/getflashplayer" />
        </object>

    Read the article

  • Why is Enumerable.Range faster than a direct yield loop?

    - by Morgan Cheng
    The code below checks the performance of three different ways of computing the same result.

        public static void Main(string[] args)
        {
            // for loop
            {
                Stopwatch sw = Stopwatch.StartNew();
                int accumulator = 0;
                for (int i = 1; i <= 100000000; ++i)
                {
                    accumulator += i;
                }
                sw.Stop();
                Console.WriteLine("time = {0}; result = {1}", sw.ElapsedMilliseconds, accumulator);
            }

            // Enumerable.Range
            {
                Stopwatch sw = Stopwatch.StartNew();
                var ret = Enumerable.Range(1, 100000000).Aggregate(0, (accumulator, n) => accumulator + n);
                sw.Stop();
                Console.WriteLine("time = {0}; result = {1}", sw.ElapsedMilliseconds, ret);
            }

            // self-made IEnumerable<int>
            {
                Stopwatch sw = Stopwatch.StartNew();
                var ret = GetIntRange(1, 100000000).Aggregate(0, (accumulator, n) => accumulator + n);
                sw.Stop();
                Console.WriteLine("time = {0}; result = {1}", sw.ElapsedMilliseconds, ret);
            }
        }

        private static IEnumerable<int> GetIntRange(int start, int count)
        {
            int end = start + count;
            for (int i = start; i < end; ++i)
            {
                yield return i;
            }
        }

    The result is like this:

        time = 306; result = 987459712
        time = 1301; result = 987459712
        time = 2860; result = 987459712

    It is not surprising that the "for loop" is faster than the other two solutions, because Enumerable.Aggregate takes more method invocations. However, it really surprises me that "Enumerable.Range" is faster than the "self-made IEnumerable". I thought that Enumerable.Range would have more overhead than the simple GetIntRange method. What is the possible reason for this?

    Read the article

  • How do I get my Flash CS4 buttons to direct the user to another URL?

    - by goldenfeelings
    My apologies if this has been fully addressed before, but I've read through several other threads and still can't seem to get my file to work. My ActionScript code is at the bottom of this message. I created it using instructions from the Adobe website: http://help.adobe.com/en_US/ActionScript/3.0_ProgrammingAS3/WS5b3ccc516d4fbf351e63e3d118a9b90204-7fd7.html I believe I have all of my objects set to the correct type of symbol (button) and all of my instances are named appropriately (see screenshot here: www.footprintsfamilyphoto.com/wp-content/themes/Footprints/images/flash_buttonissue.jpg). The ActionScript is below; let me know if you have suggestions! (Note: I am very new to Flash.)

        stop ();

        function families(event:MouseEvent):void {
            var targetURL:URLRequest = new URLRequest("http://www.footprintsfamilyphoto.com/portfolio/families");
            navigateToURL(targetURL);
        }
        bc_btn1.addEventListener(MouseEvent.CLICK, families);
        bc_btn2.addEventListener(MouseEvent.CLICK, families);

        function families(event:MouseEvent):void {
            var targetURL:URLRequest = new URLRequest("http://www.footprintsfamilyphoto.com/portfolio/families");
            navigateToURL(targetURL);
        }
        f_btn1.addEventListener(MouseEvent.CLICK, families);
        f_btn2.addEventListener(MouseEvent.CLICK, families);

        function families(event:MouseEvent):void {
            var targetURL:URLRequest = new URLRequest("http://www.footprintsfamilyphoto.com/portfolio/families");
            navigateToURL(targetURL);
        }
        cw_btn1.addEventListener(MouseEvent.CLICK, families);
        cw_btn2.addEventListener(MouseEvent.CLICK, families);

    Read the article

  • How do you specify the build output directory name when using TFS Build for an ASP.NET web application?

    - by Mark Kadlec
    I have a TFS build set up to deploy an ASP.NET project to a test server. The build works great, and deploys to the test server fine, but instead of putting it into the \Website directory that my IIS webserver is configured for, it puts the build into \Website_20100511.6. Why is the date suffixed to the directory name? Is there a way to turn that off so I can publish directly to \Website?

    Read the article

  • Why does my Spring Controller direct me to the wrong page?

    - by kc2001
    I am writing my first Spring 3.0.5 MVC app and am confused about why my controller mappings aren't doing what I expect. I have a VerifyPasswordController that is called after a user tries to log in by entering his name and password.

        // Called upon clicking "submit" from /login
        @RequestMapping(value = "/verifyPassword", method = RequestMethod.POST)
        @ModelAttribute("user")
        public String verifyPassword(User user, BindingResult result) {
            String email = user.getEmail();
            String nextPage = CHOOSE_OPERATION_PAGE; // success case
            if (result.hasErrors()) {
                nextPage = LOGIN_PAGE;
            } else if (!passwordMatches(email, user.getPassword())) {
                nextPage = LOGIN_FAILURE_PAGE;
            } else {
                // success
            }
            return nextPage;
        }

    I can verify in the debugger that this method is being called, but afterwards the verifyPassword page is displayed rather than the chooseOperation page. The console output of WebLogic seems to show that my mappings are correct:

        INFO : org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping - Mapped URL path [/chooseOperation] onto handler 'chooseOperationController'
        INFO : org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping - Mapped URL path [/chooseOperation.*] onto handler 'chooseOperationController'
        INFO : org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping - Mapped URL path [/chooseOperation/] onto handler 'chooseOperationController'

    Here is the ChooseOperationController:

        @Controller
        @SessionAttributes("leaveRequestForm")
        public class ChooseOperationController implements PageIfc, AttributeIfc {
            @RequestMapping(value = "/chooseOperation")
            @ModelAttribute("leaveRequestForm")
            public LeaveRequest setUpLeaveRequestForm(
                    @RequestParam(NAME_ATTRIBUTE) String name) {
                LeaveRequest form = populateFormFromDatabase(name);
                return form;
            }
            // helper methods omitted
        }

    I welcome any advice, particularly "generic" techniques for debugging such mapping problems. BTW, I've also tried to "redirect" to the desired page, but got the same result.

    servlet-context.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans:beans xmlns="http://www.springframework.org/schema/mvc"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns:beans="http://www.springframework.org/schema/beans"
            xmlns:context="http://www.springframework.org/schema/context"
            xsi:schemaLocation="
                http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd
                http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd">

            <!-- DispatcherServlet Context: defines this servlet's request-processing infrastructure -->

            <!-- Enables the Spring MVC @Controller programming model -->
            <annotation-driven />

            <!-- Handles HTTP GET requests for /resources/** by efficiently serving up static resources in the ${webappRoot}/resources directory -->
            <resources mapping="/resources/**" location="/resources/" />

            <!-- Resolves views selected for rendering by @Controllers to .jsp resources in the /WEB-INF/views directory -->
            <beans:bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
                <beans:property name="prefix" value="/WEB-INF/views/" />
                <beans:property name="suffix" value=".jsp" />
            </beans:bean>

            <context:component-scan base-package="com.engilitycorp.leavetracker" />

            <beans:bean id="leaveRequestForm" class="com.engilitycorp.leavetracker.model.LeaveRequest" />
        </beans:beans>

    The constants:

        String LOGIN_FAILURE_PAGE = "loginFailure";
        String LOGIN_PAGE = "login";
        String CHOOSE_OPERATION_PAGE = "chooseOperation";
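    One detail worth checking in cases like this (a sketch of the same handler method, not a diagnosis of this particular app): returning a plain view name from a @RequestMapping method makes the InternalResourceViewResolver render that JSP directly; the handler mapped to /chooseOperation is never invoked and the browser URL does not change. To actually go through ChooseOperationController, the POST handler has to return a redirect, roughly like this:

        @RequestMapping(value = "/verifyPassword", method = RequestMethod.POST)
        public String verifyPassword(@ModelAttribute("user") User user, BindingResult result) {
            if (result.hasErrors()) {
                return "login";          // renders /WEB-INF/views/login.jsp directly
            }
            if (!passwordMatches(user.getEmail(), user.getPassword())) {
                return "loginFailure";
            }
            // Issues an HTTP redirect; the browser then GETs /chooseOperation and the
            // setUpLeaveRequestForm handler in ChooseOperationController runs.
            // Note: that handler expects a request parameter (NAME_ATTRIBUTE), so it may
            // need to be appended to the redirect URL.
            return "redirect:/chooseOperation";
        }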

    Read the article

  • Is ASP.NET MVC a direct copy of Ruby on Rails concepts?

    - by Greg
    Hi, I've been developing in Ruby on Rails previously. I'm now looking at an ASP.NET web app and I'm looking at WebForms and MVC. As I look at MVC it feels as if I'm looking at the result of something a Ruby on Rails developer implemented after being forced to work in MS land. So I'm wondering: was MVC more or less taken directly from Ruby on Rails and its concepts? (Either intentionally or unintentionally.)

    Read the article

  • Databind a DropDownList control with a list of all subdirectories that exist in a particular directory

    - by sushant
    I want to databind a DropDownList control with a list of all subdirectories that exist in a particular directory on the server. The directory I want to search is in the root of the application. I am fairly new to programming and I'm not sure where to even start. I found this code on a website:

        Dim root As String = "C:\"
        Dim folders() As String = Directory.GetDirectories(root)
        Dim sb As New StringBuilder(2048)
        Dim f As String
        For Each f In folders
            Dim foldername As String = Path.GetFileName(f)
            sb.Append("<option>")
            sb.Append(foldername)
            sb.Append("</option>")
        Next
        Label3.Text = "<select runat=""server"" id=""folderlist""" & sb.ToString() & "</select>"

    I guess this is VB, but my tool is in classic ASP, so is there something similar in VBScript that I can use?
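    For comparison only, the enumeration step itself (what Directory.GetDirectories does above) looks roughly like this in Java; names are made up, and the resulting list of names is what would be bound to the dropdown:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.List;
        import java.util.stream.Collectors;
        import java.util.stream.Stream;

        public class SubdirectoryLister {
            // Returns the names of the immediate subdirectories of root.
            static List<String> subdirectoryNames(Path root) throws IOException {
                try (Stream<Path> entries = Files.list(root)) {
                    return entries.filter(Files::isDirectory)
                                  .map(p -> p.getFileName().toString())
                                  .collect(Collectors.toList());
                }
            }

            public static void main(String[] args) throws IOException {
                for (String name : subdirectoryNames(Paths.get("C:\\"))) {
                    System.out.println(name);
                }
            }
        }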

    Read the article

  • Redirection followed by POST

    - by lucas
    Hi, I'm just wondering how those air ticket booking websites redirect the user to the airline's booking website and then fill in (I suppose by doing a POST) the required information, so that the user lands on the booking page with origin/destination/date already selected. Is the technique to open up a new browser window and do an AJAX POST from there? Thanks.
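    The common trick (sketched below as a small Java servlet with made-up names; the same idea works in any server-side stack) is usually not an AJAX POST at all: the booking site returns an ordinary HTML page containing a hidden form whose action points at the airline's booking URL, pre-filled with origin/destination/date, and a line of JavaScript submits that form as soon as the page loads. The browser then performs a normal POST to the airline site and the user lands on the pre-filled booking page.

        import java.io.IOException;
        import java.io.PrintWriter;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        @WebServlet("/handoff")
        public class BookingHandoffServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                // In real code, escape these values before writing them into HTML.
                String origin = req.getParameter("origin");
                String destination = req.getParameter("destination");
                String date = req.getParameter("date");

                resp.setContentType("text/html");
                PrintWriter out = resp.getWriter();
                out.println("<html><body onload=\"document.forms[0].submit()\">");
                // Hypothetical target URL; the real one is the airline's booking endpoint.
                out.println("<form method=\"POST\" action=\"https://airline.example.com/booking\">");
                out.println("  <input type=\"hidden\" name=\"origin\" value=\"" + origin + "\">");
                out.println("  <input type=\"hidden\" name=\"destination\" value=\"" + destination + "\">");
                out.println("  <input type=\"hidden\" name=\"date\" value=\"" + date + "\">");
                out.println("</form></body></html>");
            }
        }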

    Read the article

  • Is it a good practice to perform direct database access in the code-behind of an ASP.NET page?

    - by patricks418
    Hi, I am an experienced developer but I am new to web application development. Now I am in charge of developing a new web application and I could really use some input from experienced web developers out there. I'd like to understand exactly what experienced web developers do in the code-behind pages. At first I thought it was best to have a rule that all the database access and business logic should be performed in classes external to the code-behind pages. My thought was that only logic necessary for the web form would be performed in the code-behind. I still think that all the business logic should be performed in other classes but I'm beginning to think it would be alright if the code-behind had access to the database to query it directly rather than having to call other classes to receive a dataset or collection back. Any input would be appreciated.
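    One common way to keep that rule concrete is sketched below, with hypothetical names and written in Java, but the shape is the same in a code-behind class: the page or controller never opens a connection itself; it calls a small repository class that owns the query, and only binds the returned collection to the UI.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;
        import javax.sql.DataSource;

        public class CustomerRepository {
            private final DataSource dataSource;

            public CustomerRepository(DataSource dataSource) {
                this.dataSource = dataSource;
            }

            // All SQL and connection handling lives here, not in the page/controller.
            public List<String> findCustomerNames() throws SQLException {
                List<String> names = new ArrayList<>();
                try (Connection c = dataSource.getConnection();
                     PreparedStatement ps = c.prepareStatement("SELECT name FROM customer");
                     ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        names.add(rs.getString("name"));
                    }
                }
                return names;
            }
        }

        // The presentation layer then only consumes the result, e.g.:
        //   dropDown.setItems(customerRepository.findCustomerNames());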

    Read the article
