Search Results

Search found 1855 results on 75 pages for 'weak linking'.

Page 70/75 | < Previous Page | 66 67 68 69 70 71 72 73 74 75  | Next Page >

  • Tridion Installation

    - by Kevin Brydon
    I am currently upgrading an installation of Tridion from 5.3 to 2011, starting almost from scratch (aside from migrating the database) on brand new virtual servers. I just want to ask for some advice on my current server setup... a sanity check. All servers are running Windows Server 2008. The pages on our website are all classic ASP.

    Database: SQL Server cluster. The 5.3 database has been migrated using the DatabaseManager. This is pretty standard and works well (in test anyway).

    Content Manager: A single server to run the Content Manager and the Publisher. There are around 10 people using it at any one time, so it is not under a particularly heavy load.

    Content Data Store: Filesystem located somewhere on the network. One directory for live and one for staging.

    Content Delivery: Two servers (cd1 and cd2), each with the following server roles installed. cd1 writes to a filesystem content data store for the live website; cd2 writes to the content data store for the staging website.

    Presentation: Two public facing web servers (web1 and web2) serving both the live and staging websites. The web servers read directly from the content data store as it's a filesystem. Each of the web servers has the Content Delivery Server installed so that I can use dynamic linking (and other features?). I've so far set up everything but the web servers. Any thoughts?

    edit: Thanks to Ram S who linked me to a decent walkthrough, upvoted. I suppose I should have posed some questions, as I didn't really ask a question. I guess I'm a little confused over the content delivery aspect. I have the Content Delivery split into two separate parts. cd1 and cd2 do the work of shifting information from the Content Manager to the Staging/Live web directories. web1 and web2 should do the work of serving the web pages to the outside world and will interact with the content data store (file system). Is this a correct setup? I need some parts of the Content Delivery on my web servers, right? Theoretically I could get rid of the cd1 and cd2 servers and use web1 and web2 to do the deployment, right? But I suspect this would put the web servers under unnecessary strain should there ever be a big publish. I've been reading the 2011 Installation Manual, Content Delivery section, and I'm finding it quite hard to get my head around!


  • jQuery Ajax (beforeSend and complete) working properly on Firefox but not on IE8 and Chrome

    - by Farhan Zia
    I am using jQuery ajax version 1.4.1 in my MVC application (the issue I am discussing was the same with the old jQuery version 1.3.2 as well) to check during customer registration whether the username is already registered. When the user clicks the "Check Availability" button, I show a busy image in place of the check button (actually hiding the check button and showing the image) while checking the availability on the server, and then display a message. It is a synchronous call (async: false), and I used beforeSend: and complete: to show and hide the busy image and the check button. This works well in Firefox, but in IE 8 and Chrome neither does the busy image appear nor does the check button hide; rather, the check button stays pressed as if the whole thing has hung. The "available" and "not available" messages appear correctly, though. Below is the code.

    HTML in a user control (ascx) (I have replaced the angle brackets with square ones below):

        [div id="available"]This Username is Available
        [div id="not_available"]This Username is not available
        [input id="txtUsername" name="txtUsername" type="text" size="50" /]
        [button id="check" name="check" type="button"]Check Availability[/button]
        [img id="busy" src="/Content/Images/busy.gif" /]

    At the top of this user control, I link an external javascript file that has the following code:

        $(document).ready(function() {
            $('img#busy').hide();
            $('div#available').hide();
            $('div#not_available').hide();
            $("button#check").click(function() {
                var available = checkUsername($("input#txtUsername").val());
                if (available == "1") {
                    $("div#available").show();
                    $("div#not_available").hide();
                } else {
                    $("div#available").hide();
                    $("div#not_available").show();
                }
            });
        });

        function checkUsername(username) {
            $.ajax({
                type: "POST",
                url: "/SomeController/SomeAction",
                data: { "id": username },
                timeout: 3000,
                async: false,
                beforeSend: function() {
                    $("button#check").hide();
                    $("img#busy").show();
                },
                complete: function() {
                    $("button#check").show();
                    $("img#busy").hide();
                },
                cache: false,
                success: function(result) {
                    return result;
                },
                error: function(error) {
                    $("img#busy").hide();
                    $("button#check").show();
                    alert("Some problems have occured. Please try again later: " + error);
                }
            });
        }
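
    For what it's worth, the hang pattern here is characteristic of synchronous XHR: with async: false the UI thread is blocked for the duration of the request, so the show/hide done in beforeSend often never gets painted in IE and Chrome. A sketch of the same check restructured around a callback so the call can stay asynchronous (endpoint and markup as in the question):

        // Sketch: same endpoint and markup as above, but asynchronous, with the
        // result delivered to a callback instead of being returned.
        function checkUsername(username, done) {
            $.ajax({
                type: "POST",
                url: "/SomeController/SomeAction",
                data: { "id": username },
                timeout: 3000,
                cache: false,
                // async defaults to true, so the browser can repaint the busy image:
                beforeSend: function() {
                    $("button#check").hide();
                    $("img#busy").show();
                },
                complete: function() {
                    $("button#check").show();
                    $("img#busy").hide();
                },
                success: function(result) { done(result); },
                error: function() { done(null); }
            });
        }

        $(document).ready(function() {
            $("button#check").click(function() {
                checkUsername($("input#txtUsername").val(), function(available) {
                    if (available == "1") {
                        $("div#available").show();
                        $("div#not_available").hide();
                    } else {
                        $("div#available").hide();
                        $("div#not_available").show();
                    }
                });
            });
        });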


  • trying to build Boost MPI, but the lib files are not created. What's going on?

    - by unknownthreat
    I am trying to run a program with Boost MPI, but the thing is I don't have the .lib. So I tried to create one by following the instructions at http://www.boost.org/doc/libs/1_43_0/doc/html/mpi/getting_started.html#mpi.config The instructions say "For many users using LAM/MPI, MPICH, or OpenMPI, configuration is almost automatic". I got myself OpenMPI in C:\, but I didn't do anything more with it. Do we need to do anything with it? Besides that, another statement from the instructions: "If you don't already have a file user-config.jam in your home directory, copy tools/build/v2/user-config.jam there." Well, I simply did what it says. I got myself "user-config.jam" in C:\boost_1_43_0 and put "using mpi ;" into the file. Next, this is what I've done:

        bjam --with-mpi

        C:\boost_1_43_0>bjam --with-mpi
        WARNING: No python installation configured and autoconfiguration failed.
        See http://www.boost.org/libs/python/doc/building.html for configuration instructions
        or pass --without-python to suppress this message and silently skip all Boost.Python targets
        Building the Boost C++ Libraries.
        warning: skipping optional Message Passing Interface (MPI) library.
        note: to enable MPI support, add "using mpi ;" to user-config.jam.
        note: to suppress this message, pass "--without-mpi" to bjam.
        note: otherwise, you can safely ignore this message.
        warning: Unable to construct ./stage-unversioned
        warning: Unable to construct ./stage-unversioned
        Component configuration:
        - date_time : not building
        - filesystem : not building
        - graph : not building
        - graph_parallel : not building
        - iostreams : not building
        - math : not building
        - mpi : building
        - program_options : not building
        - python : not building
        - random : not building
        - regex : not building
        - serialization : not building
        - signals : not building
        - system : not building
        - test : not building
        - thread : not building
        - wave : not building
        ...found 1 target...
        The Boost C++ Libraries were successfully built!
        The following directory should be added to compiler include paths:
        C:\boost_1_43_0
        The following directory should be added to linker library paths:
        C:\boost_1_43_0\stage\lib
        C:\boost_1_43_0>

    I see that there are many libs in C:\boost_1_43_0\stage\lib, but I see no trace of libboost_mpi-vc100-mt-1_43.lib or libboost_mpi-vc100-mt-gd-1_43.lib at all. These are the libraries required for linking in MPI applications. What could possibly have gone wrong when the libraries are not being built?
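
    Judging by the build log itself ("note: to enable MPI support, add "using mpi ;" to user-config.jam"), Boost.Build never saw the edited file: the docs mean the home directory (%HOMEDRIVE%%HOMEPATH%), not the Boost root. A minimal sketch of what that file might contain; the msvc version line is an assumption for a VC10 build:

        # %HOMEDRIVE%%HOMEPATH%\user-config.jam  (e.g. C:\Users\you\user-config.jam),
        # not C:\boost_1_43_0\user-config.jam
        using msvc : 10.0 ;

        # With OpenMPI merely unpacked in C:\, auto-configuration may still fail;
        # "using mpi ;" can also be given an explicit path to the MPI compiler wrapper.
        using mpi ;

    A rebuild with "bjam --with-mpi" should then print the Component configuration without the "skipping optional Message Passing Interface (MPI) library" warning; that warning is the tell-tale that the jam file is not being read.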


  • Dumb IE6 resize behaviour - hope it rings some bells with someone

    - by Ollie2893
    Hi, I'm having no end of fun (sic) with jQuery.tabs. The widget is quite crafty in that it turns basic HTML like so

        <div>
            <ul>
                <li>Tab #1</li>
                ...
            </ul>
            <div for panel #1> </div>
            <div for panel #2> </div>
            ...
        </div>

    into a cute tabbed dialogue. (It does so by restyling the UL and then toggling the "display" attribute for the panel DIVs to show/not show whatever panel is selected.) Now I found that I can spare myself a lot of trouble in my JS project if I insert a scrollable IFRAME into each panel. One usability problem I'm trying to ameliorate is that when the tabbed panel becomes larger than the browser's window, the user ends up with too many scrollbars. I am trying to avoid this situation by linking the size of the tabbed panel to that of $(window); that is, I trap and process the resize event on $(window). To make my life bearable, all components are relatively sized. This is true, in particular, of the IFRAMEs (100% width, 100% height). The only exception is the panel DIVs, which are of fixed height (in px), and this is the only dimension CSS attribute that I manipulate during my resize action. All of this works a treat in FF and Chrome, but IE6 is doing something rather cute: so long as I do not affect the width of the browser window (but only change its height), only the panel DIV changes in height; the IFRAME contained within will not change. As a result of this behaviour, it is not possible to shorten the tabbed panel below the height of the IFRAME. I can lengthen the DIV, yes, but the IFRAME will not fill the panel in that case. All becomes good the moment I make the slightest change to the width of the browser window: in that moment, the IFRAME expands to catch up with the extended DIV, or DIV and IFRAME contract in tandem. Bizarre. I inserted useless CSS instructions like "position: relative" and "zoom: 1". Also nudged the display with "display: block". No joy so far. Any ideas? Thanks.
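
    Shot in the dark, but a workaround that sometimes unsticks IE6 layout bugs like this is to force the IFRAMEs to re-layout after the panel DIV's height changes; that is, fake the width change that is observed to fix it. The selector below is a placeholder for however the panels are actually addressed:

        $(window).resize(function() {
            // ... existing code that sets the panel DIV's height ...

            // Nudge IE6: touch a layout-affecting property on each IFRAME and
            // restore it on the next tick, forcing a reflow without a visible jump.
            $('.ui-tabs-panel iframe').each(function() {
                var iframe = this;
                iframe.style.width = '99.99%';
                setTimeout(function() { iframe.style.width = '100%'; }, 0);
            });
        });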


  • Which third party website thumbnailing services do you use?

    - by Ben Delarre
    I've got a requirement for showing thumbnails of arbitrary websites. I need to be able to show small thumbnails (120px by 90px) and larger thumbnails of around 480px wide. I'll need to specify the queue and invalid placeholder images, and preferably have a pingback when the queued images are processed so I can respond appropriately. I'd also need a simple API I can use either directly embedded in my HTML, or from a simple web request to queue the images. I've been looking at various services, ranging from low-fi to large-scale ones. Here are some examples:

    www.bitpixels.com: Uses Google AppEngine; seems like a prototype or a toy. Free!

    www.websnapr.com: Tried using this; made a free account and requested a thumbnail. Waited a few minutes, refreshed a couple of times, and ended up having the account banned. Free is tricky, yes, but if I can't try it out successfully I'm disinclined to pay.

    www.shrinktheweb.com: Free account seems to be very quick. Lots of documentation on the site, even covering local caching of the images to your own server (documentation mostly in PHP). Quality of thumbnails looks good, and there appear to be sufficient options for setting thumbnail placeholder images and parameters for altering how the thumbnailing is done. Also supports large 'screenshots' of URLs, which is very useful for me. Discovered the PRO pricing is an à la carte menu, allowing me to select just the features I want and keep the monthly cost low. Excellent stuff; I have decided to use this service.

    www.thumbalizr.com: Good coverage of thumbnail sizes and control options, even allowing specification of browser width when thumbnailing. No pingback, but I can live without that. Supports local caching of images with a PHP API; I would prefer .NET, but can port it if necessary. Looks like a fairly professional service, but seems fairly expensive for the number of thumbnails you get to generate.

    (Apologies for the lack of proper linking - spam protection!) I'm not entirely convinced by any of them, and since this will be a long-term service I'd like some stability and support. I'm willing to pay for the service, but I'd want something that fulfills most if not all of my requirements. I should also mention that we're hosted on Windows under IIS, so local solutions involving Xvfb and the like sadly can't be used for this project. So my question is: what services do you use? How have they panned out? Are you happy with them?


  • jquery addresses and live method

    - by Jay
        //deep linking
        $.fn.ajaxAnim = function() {
            $(this).animW();
            $(this).html('<div class="load-prog">loading...</div>');
        }

        $("document").ready(function(){
            contM = $('#main-content');
            contS = $('#second-content');
            $(contM).hide();
            $(contM).addClass('hidden');
            $(contS).hide();
            $(contS).addClass('hidden');

            function loadURL(URL) {
                //console.log("loadURL: " + URL);
                $.ajax({
                    url: URL,
                    beforeSend: function(){$(contM).ajaxAnim();},
                    type: "POST",
                    dataType: 'html',
                    data: {post_loader: 1},
                    success: function(data){
                        $(contM).html(data);
                        $('.post-content').initializeScroll();
                    }
                });
            }

            // Event handlers
            $.address.init(function(event) {
                //console.log("init: " + $('[rel=address:' + event.value + ']').attr('href'));
            }).change(function(event) {
                evVal = event.value;
                if(evVal == '/'){return false;}
                else{
                    $.ajax({
                        url: $('[rel=address:' + evVal + ']').attr('href'),
                        beforeSend: function(){$(contM).ajaxAnim();},
                        type: "POST",
                        dataType: 'html',
                        data: {post_loader: 1},
                        success: function(data){
                            $(contM).html(data);
                            $('.post-content').initializeScroll();
                        }});
                }
                //console.log("change");
            })

            $('.update-main a, a.update-main').live('click', function(){
                loadURL($(this).attr('href'));
                return false;
            });

            $(".update-second a, a.update-second").live('click', function() {
                var link = $(this);
                $.ajax({
                    url: link.attr("href"),
                    beforeSend: function(){$(contS).ajaxAnim();},
                    type: "POST",
                    dataType: 'html',
                    data: {post_loader: 1},
                    success: function(data){
                        $(contS).html(data);
                        $('.post-content').initializeScroll();
                    }});
                return false;
            });
        });

    I'm using jQuery addresses to update content while maintaining a useful URL. When clicking on links in the main nav, the URL is updated properly, but when links are loaded dynamically with ajax, the URL address function breaks. I have made the 'click' events live, allowing content to be loaded via dynamically loaded links, but I can't seem to make the address event listener live, and this seems to be the only way to make this work. Is my syntax wrong if I change this:

        $.address.change(function(event) {

    to this:

        $.address.live('change', function(event) {

    or does the live method not work with this plugin?


  • Simple JQuery Validator addMethod not working

    - by tehaaron
    Updated question at the bottom. I am trying to validate a super simple form. Eventually the username will be compared to a RegExp statement, and the same will go for the password. However, right now I am just trying to learn the Validator addMethod format. I currently have this script:

        JQuery.validator.addMethod(
            "legalName",
            function(value, element) {
                if (element.value == "bob") {
                    return false;
                }
                else return true;
            },
            "Use a valid username."
        );

        $(document).ready(function() {
            $("#form1").validate({
                rules: {
                    username: {
                        legalName: true
                    }
                },
            });
        });

    If I am not mistaken, this should return false and respond with "Use a valid username." if I were to put "bob" into the form. However, it is simply submitting. I am linking to jQuery BEFORE Validator in the header, as instructed. My uber simple form looks like this:

        <form id="form1" method="post" action="">
            <div class="form-row"><span class="label">Username *</span><input type="text" name="username" /></div>
            <div class="form-row"><input class="submit" type="submit" value="Submit"></div>
        </form>

    Finally, how would I go about restructuring the addMethod function to return true in the if branch and false in the else branch, while keeping the message alert for a false return? (Ignore this last part if you don't understand what I was trying to say :) ) Thanks in advance. Thanks to everyone who pointed out my JQuery - jQuery typo.

    New: Ideally, I am trying to turn this into a simple login form (username/password). It is for demonstration only, so it won't have a database attached or anything, just some simple JS validations. I am looking to make the username validate for <48 characters, only English letters and numbers, no special characters. I thought a whitelist would be easiest, so I had something like this: ^[a-zA-Z0-9]*${1,48} but I am not sure if that is proper JS RegExp (it varies from Ruby RegExp if I am not mistaken?... Usually I use rubular.com). The password will be similar but require some upper/lowercase letters and numbers. I believe I need to make another $.validator.addMethod for legalPassword that will look very similar.
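
    On the regex in the update: JavaScript puts the length bound inside the quantifier, so ^[a-zA-Z0-9]*${1,48} would need to become ^[a-zA-Z0-9]{1,48}$. A sketch of the rule with that pattern (note the lowercase jQuery):

        // jQuery, not JQuery -- the lowercase j is what made the original silently fail.
        jQuery.validator.addMethod(
            "legalName",
            function(value, element) {
                // this.optional(element) lets a separate "required" rule handle empties;
                // otherwise: 1-48 characters, ASCII letters and digits only.
                return this.optional(element) || /^[a-zA-Z0-9]{1,48}$/.test(value);
            },
            "Use a valid username."
        );

        $(document).ready(function() {
            $("#form1").validate({
                rules: {
                    username: { legalName: true }
                }
            });
        });

    A legalPassword method would follow the same shape with a different pattern.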


  • Using perl to parse a file and insert specific values into a database

    - by Sean
    Disclaimer: I'm a newbie at scripting in Perl; this is partially a learning exercise (but still a project for work). Also, I have a much stronger grasp of shell scripting, so my examples will likely be formatted in that mindset (but I would like to create them in Perl). Sorry in advance for my verbosity; I want to make sure I am at least marginally clear in getting my point across.

    I have a text file (a reference guide) that is a Word document converted to text and then swapped from Windows to UNIX format in Notepad++. The file is uniform, in that each section of the file has the same fields/formatting/tables. What I plan to do, in a basic way, is grab each section, keyed by unique batch job names, and place all of the values into a database (or maybe just an Excel file) so all the fields can be searched/edited for each job much more easily than in the Word file, and possibly create a web interface later on. So what I want to do is grab each section by doing something like:

        sed -n '/job_name_1_regex/,/job_name_2_regex/' file.txt

    How would this be formatted within a Perl script? (Grab the section in total, then break it down further from there.) To read the file in the script I have

        open FORMAT_FILE, 'test_format.txt';

    and then use

        foreach $line (<FORMAT_FILE>)

    to parse the file line by line. Is there a better way? My next problem is that I converted from a Word doc with tables, which look like:

        Table Heading 1        Table Heading 2
        Heading 1/Value 1      Heading 2/Value 1
        Heading 1/Value 2      Heading 2/Value 2

    but in the text file they look like:

        Table Heading 1 Table Heading 2Heading 1/Value 1Heading 1/Value 2Heading 2/Value 1Heading 2/Value 2

    So I want to have "Heading 1" and "Heading 2" as column names and then put the respective values there. I just am not sure how to get the values in relation to the headings from the text file. The value of Heading 1 will always be at the line number of Heading 1 plus 2 (Heading 1, Heading 2, values for Heading 1). I know this can be done in awk/sed pretty easily; I'm just not sure how to address it inside a Perl script. After I have all the right values and such, linking it up to a database may be an issue as well; I haven't started looking at the way Perl interacts with DBs yet. Sorry if this is a bit scatterbrained... it's still not fully formed in my head.
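
    Perl has a direct analogue of that sed range: the .. flip-flop operator (if (/start_regex/ .. /end_regex/) { ... }). A rough sketch of section-grabbing plus a DBI insert; the job-name pattern, the SQLite database, and the table/column names are all made up for illustration:

        use strict;
        use warnings;
        use DBI;

        # Hypothetical connection -- swap in the real driver/credentials.
        my $dbh = DBI->connect('dbi:SQLite:dbname=jobs.db', '', '', { RaiseError => 1 });
        my $insert = $dbh->prepare('INSERT INTO jobs (name, body) VALUES (?, ?)');

        open my $fh, '<', 'test_format.txt' or die "open: $!";

        my ($job, @section);
        while (my $line = <$fh>) {
            chomp $line;
            if ($line =~ /^(JOB_\w+)/) {          # assumed job-name pattern
                # flush the previous section before starting a new one
                $insert->execute($job, join "\n", @section) if defined $job;
                ($job, @section) = ($1);
            }
            push @section, $line if defined $job;
        }
        $insert->execute($job, join "\n", @section) if defined $job;  # last section

        close $fh;
        $dbh->disconnect;

    Once a section is in @section, the "heading line plus 2" rule from the question becomes simple array indexing into it.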


  • How to fix this window.open memory leak?

    - by DotnetShadow
    Hi there, I was recently looking at this memory leak tool, sIEve: http://home.orange.nl/jsrosman/ So I decided to test out the tool by creating a main page that will open up a popup window. I started by creating 3 pages: index.html, page1.html and page2.html. The index.html page will open a child window (popup) linking to page1.html. Page1 will have an anchor tag that links to page2.html, while page2 will have a link back to page1.html.

    PROBLEM: So in the tool I entered the index.html page, the popup window opened to page1.html, and I then clicked the page2 link; no leaks detected yet. While I'm on page2 I click the link back to page1, and that's where the tool claims there is a leak. The leak seems to be happening on the index.html page, and I have no idea as to why it would be doing that. Even more concerning is that I can see elements that the tool detects that aren't even on my page. Does anyone have any experience with this tool, or know if this really is a memory leak? Any samples showing how to achieve what I'm doing without memory leaks?

    INDEX.HTML:

        <script type="text/javascript">
            MYLEAK = function() {
                var childWindow = null;

                function showWindow() {
                    childWindow = window.open("page1.html", "myWindow");
                    return false;
                }

                return {
                    init: function() {
                        $("#window-link").bind("click", showWindow);
                    }
                }
            }();
        </script>
        </head>
        <body>
            <a id="window-link" href="#" on>Open Window</a>
            <script type="text/javascript">
                $(document).ready(function() {
                    MYLEAK.init();
                });
            </script>
        </body>
        </html>

    PAGE1.HTML:

        <html>
        <body>
            <h1>Page 1</h1>
            <a href="page2.html">Page2</a>
        </body>
        </html>

    PAGE2.HTML:

        <html>
        <body>
            <h1>Page 2</h1>
            <a href="page1.html">Page1</a>
        </body>
        </html>

    Appreciate your efforts.
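
    If the leak is real, one common precaution is to drop the handler and the window reference when the parent page unloads, so the closure no longer pins either document in memory. A sketch against the code above:

        MYLEAK = function() {
            var childWindow = null;

            function showWindow() {
                childWindow = window.open("page1.html", "myWindow");
                return false;
            }

            return {
                init: function() {
                    $("#window-link").bind("click", showWindow);
                },
                cleanup: function() {
                    // Unbind the handler and release the window reference.
                    $("#window-link").unbind("click", showWindow);
                    childWindow = null;
                }
            };
        }();

        $(window).unload(function() { MYLEAK.cleanup(); });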


  • multiple definition of inline function

    - by K71993
    Hi, I have gone through some posts related to this topic but was not able to sort out my doubt completely. This might be a very naive question.

    Code description: I have a header file "inline.h" and two translation units, "main.cpp" and "tran.cpp". Details of the code are below.

    inline.h file details:

        #ifndef __HEADER__
        #define __HEADER__
        #include <stdio.h>

        extern inline int func1(void) { return 5; }
        static inline int func2(void) { return 6; }
        inline int func3(void) { return 7; }

        #endif

    main.cpp file details:

        #include <stdio.h>
        #include <inline.h>

        int main(int argc, char *argv[])
        {
            printf("%d\n", func1());
            printf("%d\n", func2());
            printf("%d\n", func3());
            return 0;
        }

    tran.cpp file details (note that the functions are not inline here):

        #include <stdio.h>

        int func1(void) { return 500; }
        int func2(void) { return 600; }
        int func3(void) { return 700; }

    Questions: The above code does not compile with gcc but compiles with g++ (assuming you make the changes needed for gcc, like renaming the files to .c and not using any C++ headers, etc.). The error displayed is "duplicate definition of inline function - func3". Can you clarify why this difference exists between the compilers? When you run the program (compiled with g++) by creating two separate compilation units (main.o and tran.o) and linking them into an executable a.out, the output is

        500
        6
        700

    Why does the compiler pick up the definition of the function which is not inline? Actually, since #include is used to "add" the inline definition, I had expected 5, 6, 7 as the output. My understanding was that during compilation, since the inline definition is found, the function call would be "replaced" by the inline function definition. Can you please tell me in detail the process of compilation and linking which leads to the 500, 6, 700 output? I can only understand the output 6. Thanks in advance for your valuable input.
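
    For context on the 500/6/700: in C++ an inline function may be defined in several translation units, but every definition must be identical; tran.cpp breaks that rule for func1 and func3, which is an ODR violation the linker is not required to diagnose, so it silently picks one definition (here the non-inline ones). func2 prints 6 because static gives each translation unit its own private copy. gcc rejects the same code because C's inline rules (gnu89/C99) genuinely differ from C++'s, and an extern inline in C can collide with a plain external definition. If the two sets of functions really are meant to coexist, a sketch of one way out is to give the tran.cpp versions internal linkage:

        // tran.cpp -- with internal linkage these no longer clash with the
        // inline definitions other translation units pull in from inline.h.
        static int func1(void) { return 500; }
        static int func2(void) { return 600; }
        static int func3(void) { return 700; }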


  • MS Access MSChart.Graph.8 not printing

    - by Tanj
    Software: Microsoft Access 2007 SP2. Database file version: Access 2000.

    I have an Access program that I inherited from a previous employee. It uses forms for reports, and since I don't have much experience in Access I have continued to do this. I created a copy of the program for another project and modified it to suit. I am having trouble getting more than one chart to print. All the charts display in form view, and they all have the same properties (excepting data, position, etc.). For some reason they are not printing; they don't even show up in the print preview. I am thinking it must be something with the graphs themselves, as they sometimes lose all information. I have to open the graphs in edit mode and change the data source from column to row and back again so that they get redrawn. (Refresh doesn't fix it.) So right now I don't even have a clue as to where to look, so ideas are welcome.

    Edit #1: It seems to be a problem with linking to an unbound form. Subform Field Linker: "Can't build a link between unbound forms." The query for the main form is:

        SELECT tTest.ixTest, tMotorTypes.ixMotorType, tMotorTypes.asMotorType, tMotorTypes.fDeprecated,
               tTestType.asTest, tTest.asSerialNum, tTest.asOrderNum, tTest.asFrameNum, tTest.asRotorNum,
               tTest.asOperator, tTest.iStation, tTest.dtTestDate, tTest.ixTestType
        FROM tMotorTypes INNER JOIN (tTestType INNER JOIN tTest ON tTestType.ixTestType=tTest.ixTestType)
             ON tMotorTypes.ixMotorType=tTest.ixMotorType;

    The query for the chart is:

        SELECT qGraphRSTTemperatures.Frequency, qGraphRSTTemperatures.[Drive End],
               qGraphRSTTemperatures.[Non Drive End], qGraphRSTTemperatures.[Air In],
               qGraphRSTTemperatures.Core
        FROM qGraphRSTTemperatures
        ORDER BY qGraphRSTTemperatures.ixTemperature;

    Query qGraphRSTTemperatures:

        SELECT tElectricalData.dblFrequency AS Frequency, tTemperatures.dblDrvEnd AS [Drive End],
               tTemperatures.dblNonDrvEnd AS [Non Drive End], tTemperatures.dblAirIn AS [Air In],
               tTemperatures.dblCore AS Core, tSubTest.ixTest, tTemperatures.ixTemperature
        FROM (tSubTest INNER JOIN tElectricalData ON tSubTest.ixSubTest = tElectricalData.ixSubTest)
             LEFT JOIN tTemperatures ON tElectricalData.ixElectrical = tTemperatures.ixElectrical
        WHERE (((tSubTest.ixSubTestType)=5))
        ORDER BY tSubTest.ixTest, tTemperatures.ixTemperature;

    So how come, in form view, it shows the graph with the correct data when linked thus (Child field: ixTest; Master field: ixTest), but won't print the graph? The graph will print if I remove the links, but then I have all the data from the chart query, as it is not limited by ixTest.

    Edit #2: It seems to be a data retrieval/rendering issue in printing. Is there anything in printing that changes the context of records with respect to parent/child relationships?


  • Why won't this jQuery run on IE?

    - by Charles Marsh
    Hello All, I have this jQuery code:

        (function($){
            $.expr[':'].linkingToImage = function(elem, index, match){
                // This will return true if the specified attribute contains a valid link to an image:
                return !! ($(elem).attr(match[3]) && $(elem).attr(match[3]).match(/\.(gif|jpe?g|png|bmp)$/i));
            };

            $.fn.imgPreview = function(userDefinedSettings){
                var s = $.extend({
                    /* DEFAULTS */
                    // CSS to be applied to image:
                    imgCSS: {},
                    // Distance between cursor and preview:
                    distanceFromCursor: {top:2, left:2},
                    // Boolean, whether or not to preload images:
                    preloadImages: true,
                    // Callback: run when link is hovered: container is shown:
                    onShow: function(){},
                    // Callback: container is hidden:
                    onHide: function(){},
                    // Callback: Run when image within container has loaded:
                    onLoad: function(){},
                    // ID to give to container (for CSS styling):
                    containerID: 'imgPreviewContainer',
                    // Class to be given to container while image is loading:
                    containerLoadingClass: 'loading',
                    // Prefix (if using thumbnails), e.g. 'thumb_'
                    thumbPrefix: '',
                    // Where to retrieve the image from:
                    srcAttr: 'rel'
                }, userDefinedSettings),

                $container = $('<div/>').attr('id', s.containerID)
                    .append('<img/>').hide()
                    .css('position','absolute')
                    .appendTo('body'),

                $img = $('img', $container).css(s.imgCSS),

                // Get all valid elements (linking to images / ATTR with image link):
                $collection = this.filter(':linkingToImage(' + s.srcAttr + ')');

                // Re-usable means to add prefix (from setting):
                function addPrefix(src) {
                    return src.replace(/(\/?)([^\/]+)$/,'$1' + s.thumbPrefix + '$2');
                }

                if (s.preloadImages) {
                    (function(i){
                        var tempIMG = new Image(),
                            callee = arguments.callee;
                        tempIMG.src = addPrefix($($collection[i]).attr(s.srcAttr));
                        tempIMG.onload = function(){
                            $collection[i + 1] && callee(i + 1);
                        };
                    })(0);
                }

                $collection
                    .mousemove(function(e){
                        $container.css({
                            top: e.pageY + s.distanceFromCursor.top + 'px',
                            left: e.pageX + s.distanceFromCursor.left + 'px'
                        });
                    })
                    .hover(function(){
                        var link = this;
                        $container
                            .addClass(s.containerLoadingClass)
                            .show();
                        $img
                            .load(function(){
                                $container.removeClass(s.containerLoadingClass);
                                $img.show();
                                s.onLoad.call($img[0], link);
                            })
                            .attr( 'src' , addPrefix($(link).attr(s.srcAttr)) );
                        s.onShow.call($container[0], link);
                    }, function(){
                        $container.hide();
                        $img.unbind('load').attr('src','').hide();
                        s.onHide.call($container[0], this);
                    });

                // Return full selection, not $collection!
                return this;
            };
        })(jQuery);

    It works perfectly in all browsers apart from IE, in which it does nothing: no errors, no clues. I have a funny feeling IE doesn't support attr? Can anyone offer any advice?


  • Correct way to make datasources/resources a deploy-time setting

    - by Draemon
    I have a web-app that requires two settings: a JDBC datasource and a string token. I desperately want to be able to deploy one .war to various different containers (jetty, tomcat, gf3 minimum) and configure these settings at application level within the container. My code does this:

        InitialContext ctx = new InitialContext();
        Context envCtx = (javax.naming.Context) ctx.lookup("java:comp/env");
        token = (String) envCtx.lookup("token");
        ds = (DataSource) envCtx.lookup("jdbc/datasource");

    Let's assume I've used the glassfish management interface to create two JDBC resources, jdbc/test-datasource and jdbc/live-datasource, which connect to different copies of the same schema, on different servers, with different credentials, etc. Say I want to deploy this to glassfish and point it at the test datasource; I might have this in my sun-web.xml:

        ...
        <resource-ref>
            <res-ref-name>jdbc/datasource</res-ref-name>
            <jndi-name>jdbc/test-datasource</jndi-name>
        </resource-ref>
        ...

    But sun-web.xml goes inside my war, right? Surely there must be a way to do this through the management interface. Am I even trying to do the right thing? Do other containers make this any easier? I'd be particularly interested in how jetty 7 handles this, since I use it for development.

    EDIT: Tomcat has a reasonable way to do this. Create $TOMCAT_HOME/conf/Catalina/localhost/webapp.xml with:

        <?xml version="1.0" encoding="UTF-8"?>
        <Context antiResourceLocking="false" privileged="true">

            <!-- String resource -->
            <Environment name="token" value="value of token" type="java.lang.String" override="false" />

            <!-- Linking to a global resource -->
            <ResourceLink name="jdbc/datasource1" global="jdbc/test" type="javax.sql.DataSource" />

            <!-- Derby -->
            <Resource name="jdbc/datasource2" type="javax.sql.DataSource"
                auth="Container" driverClassName="org.apache.derby.jdbc.EmbeddedDataSource"
                url="jdbc:derby:test;create=true" />

            <!-- H2 -->
            <Resource name="jdbc/datasource3" type="javax.sql.DataSource"
                auth="Container" driverClassName="org.h2.jdbcx.JdbcDataSource"
                url="jdbc:h2:~/test" username="sa" password="" />

        </Context>

    Note that override="false" means the opposite: it means that this setting can't be overridden by web.xml. I like this approach because the file is part of the container configuration, not the war, but it's not part of the global configuration; it's webapp-specific. I guess I expect a bit more from glassfish, since it is supposed to have a full web admin interface, but I would be happy enough with something equivalent to the above.
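
    On the Jetty 7 part of the question: Jetty's analogue is a WEB-INF/jetty-env.xml (or the same elements in a context XML deployed outside the war, which keeps the settings out of the archive like Tomcat's approach). A sketch along those lines; the class names are the org.eclipse.jetty ones used by Jetty 7 and worth checking against the local version, and the jetty-plus/jetty-jndi modules must be enabled on the server:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN"
                  "http://www.eclipse.org/jetty/configure.dtd">
        <!-- WEB-INF/jetty-env.xml: per-webapp JNDI entries, read at deploy time -->
        <Configure class="org.eclipse.jetty.webapp.WebAppContext">

          <!-- java:comp/env/token -->
          <New class="org.eclipse.jetty.plus.jndi.EnvEntry">
            <Arg>token</Arg>
            <Arg type="java.lang.String">value of token</Arg>
            <Arg type="boolean">true</Arg>
          </New>

          <!-- java:comp/env/jdbc/datasource, reusing the H2 example from above -->
          <New class="org.eclipse.jetty.plus.jndi.Resource">
            <Arg>jdbc/datasource</Arg>
            <Arg>
              <New class="org.h2.jdbcx.JdbcDataSource">
                <Set name="URL">jdbc:h2:~/test</Set>
                <Set name="User">sa</Set>
                <Set name="Password"></Set>
              </New>
            </Arg>
          </New>

        </Configure>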


  • Isn't the C++ standard library backward-compatible?

    - by Chris Metzler
    Hi. I'm working on a 64-bit Linux system, trying to build some code that depends on third-party libraries for which I have binaries. During linking, I get a stream of undefined reference errors for one of the libraries, indicating that the linker couldn't resolve references to standard C++ functions/classes, e.g.:

        librxio.a(EphReader.o): In function `gpstk::EphReader::read_fic_data(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)':
        EphReader.cpp:(.text+0x27c): undefined reference to `std::basic_ostream<char, std::char_traits<char> >& std::__ostream_insert<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*, long)'
        EphReader.cpp:(.text+0x4e8): undefined reference to `std::basic_ostream<char, std::char_traits<char> >& std::__ostream_insert<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*, long)'

    I'm not really a C++ programmer, but this looks to me like it can't find the standard library. Doing some more research, I got the following when I looked at librxio's dependency on the standard library:

        $ ldd librxio.so.16.0
        ./librxio.so.16.0: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by ./librxio.so.16.0)
            libm.so.6 => /lib64/libm.so.6 (0x00002aaaaad45000)
            libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00002aaaaafc8000)
            libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00002aaaab2c8000)
            libc.so.6 => /lib64/libc.so.6 (0x00002aaaab4d7000)
            /lib64/ld-linux-x86-64.so.2 (0x0000555555554000)

    So I read that as saying that librxio (one of the third-party libraries) requires at least v3.4.9 of the standard library. But the version I have installed is 4.1.2:

        $ rpm -qa | grep libstdc
        compat-libstdc++-33-3.2.3-61.x86_64
        libstdc++-devel-4.1.2-14.el5.i386
        libstdc++-devel-4.1.2-14.el5.x86_64
        libstdc++-4.1.2-14.el5.x86_64
        libstdc++-4.1.2-14.el5.i386

    Shouldn't this work? The shared object major number is 6, same as for v3.4.9. At this level, shouldn't this be backward compatible? It seems like the third-party library is looking for an earlier version of the standard library than what I have installed; but isn't there backward compatibility between versions with the same major number for the shared library? Again, I'm not really a C++ programmer, but I don't see what the problem is. Any advice greatly appreciated. Thanks.
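
    One quick way to confirm the mismatch: GLIBCXX_3.4.9 names a symbol-version tag rather than the library's release number, and the libstdc++ shipped with GCC 4.1.2 stops at an earlier tag. Listing the tags the installed library actually exports shows whether 3.4.9 is present (a sketch; the path matches the ldd output above):

        # Every GLIBCXX_* version tag this libstdc++ build provides:
        strings /usr/lib64/libstdc++.so.6 | grep '^GLIBCXX'

    If GLIBCXX_3.4.9 is missing from that list, librxio was built against a newer libstdc++ (i.e. a newer GCC) than the one installed, so it is the system library that is too old rather than too new.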


  • Any tips on reducing wxWidgets application code size?

    - by Billy ONeal
    I have written a minimal wxWidgets application:

    stdafx.h:

        #define wxNO_REGEX_LIB
        #define wxNO_XML_LIB
        #define wxNO_NET_LIB
        #define wxNO_EXPAT_LIB
        #define wxNO_JPEG_LIB
        #define wxNO_PNG_LIB
        #define wxNO_TIFF_LIB
        #define wxNO_ZLIB_LIB
        #define wxNO_ADV_LIB
        #define wxNO_HTML_LIB
        #define wxNO_GL_LIB
        #define wxNO_QA_LIB
        #define wxNO_XRC_LIB
        #define wxNO_AUI_LIB
        #define wxNO_PROPGRID_LIB
        #define wxNO_RIBBON_LIB
        #define wxNO_RICHTEXT_LIB
        #define wxNO_MEDIA_LIB
        #define wxNO_STC_LIB
        #include <wx/wxprec.h>

    Minimal.cpp:

        #include "stdafx.h"
        #include <memory>
        #include <wx/wx.h>

        class Minimal : public wxApp
        {
        public:
            virtual bool OnInit();
        };

        IMPLEMENT_APP(Minimal)
        DECLARE_APP(Minimal)

        class MinimalFrame : public wxFrame
        {
            DECLARE_EVENT_TABLE()
        public:
            MinimalFrame(const wxString& title);
            void OnQuit(wxCommandEvent& e);
            void OnAbout(wxCommandEvent& e);
        };

        BEGIN_EVENT_TABLE(MinimalFrame, wxFrame)
            EVT_MENU(wxID_ABOUT, MinimalFrame::OnAbout)
            EVT_MENU(wxID_EXIT, MinimalFrame::OnQuit)
        END_EVENT_TABLE()

        MinimalFrame::MinimalFrame(const wxString& title)
            : wxFrame(0, wxID_ANY, title)
        {
            std::auto_ptr<wxMenu> fileMenu(new wxMenu);
            fileMenu->Append(wxID_EXIT, L"E&xit\tAlt-X", L"Terminate the Minimal Example.");

            std::auto_ptr<wxMenu> helpMenu(new wxMenu);
            helpMenu->Append(wxID_ABOUT, L"&About\tF1", L"Show the about dialog box.");

            std::auto_ptr<wxMenuBar> bar(new wxMenuBar);
            bar->Append(fileMenu.get(), L"&File");
            fileMenu.release();
            bar->Append(helpMenu.get(), L"&Help");
            helpMenu.release();
            SetMenuBar(bar.get());
            bar.release();

            CreateStatusBar(2);
            SetStatusText(L"Welcome to wxWidgets!");
        }

        void MinimalFrame::OnAbout(wxCommandEvent& e)
        {
            wxMessageBox(L"Some text about me!", L"About", wxOK, this);
        }

        void MinimalFrame::OnQuit(wxCommandEvent& e)
        {
            Close();
        }

        bool Minimal::OnInit()
        {
            std::auto_ptr<MinimalFrame> mainFrame(
                new MinimalFrame(L"Minimal wxWidgets Application"));
            mainFrame->Show();
            mainFrame.release();
            return true;
        }

    This minimal program weighs in at 2.4MB! (Executable compression drops this to half a MB or so, but that's still HUGE!) (I must statically link because this application needs to be single-binary-xcopy-deployed, so both the C runtime and wxWidgets itself are set for static linking.) Any tips on cutting this down? (I'm using Microsoft Visual Studio 2010)
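
    Not wxWidgets-specific, but these are the MSVC switches usually tried first when trimming a statically linked binary; a sketch of the compile/link settings, with no promises about how much they recover here (the library list is elided):

        rem Compile: favour size (/O1), put each function in its own COMDAT (/Gy),
        rem and enable link-time code generation (/GL):
        cl /O1 /Gy /GL /c Minimal.cpp

        rem Link: drop unreferenced COMDATs, fold identical ones, use LTCG:
        link /OPT:REF /OPT:ICF /LTCG Minimal.obj <wx and CRT libraries...>

    In the IDE these live under C/C++ > Optimization and Linker > Optimization; a custom wxWidgets build with unused features compiled out (matching the wxNO_*_LIB defines above) is the other common lever.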


  • Two separate tm structs mirroring each other

    - by BSchlinker
    Here is my current situation:

    - I have two tm structs, both set to the current time
    - I make a change to the hour in one of the structs
    - The change is occurring in the other struct magically....

    How do I prevent this from occurring? I need to be able to compare and know the number of seconds between two different times: the current time and a time in the future. I've been using difftime and mktime to determine this. I recognize that I don't technically need two tm structs (the other struct could just be a time_t loaded with raw time), but I'm still interested in understanding why this occurs.

        void Tracker::monitor(char* buffer){
            // time handling
            time_t systemtime, scheduletime, currenttime;
            struct tm * dispatchtime;
            struct tm * uiuctime;
            double remainingtime;

            // let's get two structs operating with current time
            dispatchtime = dispatchtime_tm();
            uiuctime = uiuctime_tm();

            // set the scheduled parameters
            dispatchtime->tm_hour = 5;
            dispatchtime->tm_min = 05;
            dispatchtime->tm_sec = 14;
            uiuctime->tm_hour = 0;

            // both of these will now print the same time! (0:05:14)
            // what's linking them??

            // print the scheduled time
            printf ("Current Time : %2d:%02d:%02d\n", uiuctime->tm_hour, uiuctime->tm_min, uiuctime->tm_sec);
            printf ("Scheduled Time : %2d:%02d:%02d\n", dispatchtime->tm_hour, dispatchtime->tm_min, dispatchtime->tm_sec);
        }

        struct tm* Tracker::uiuctime_tm(){
            time_t uiucTime;
            struct tm *ts_uiuc;

            // give currentTime the current time
            time(&uiucTime);

            // change the time zone to UIUC
            putenv("TZ=CST6CDT");
            tzset();

            // get the localtime for the tz selected
            ts_uiuc = localtime(&uiucTime);

            // set back the current timezone
            unsetenv("TZ");
            tzset();

            // set back our results
            return ts_uiuc;
        }

        struct tm* Tracker::dispatchtime_tm(){
            time_t currentTime;
            struct tm *ts_dispatch;

            // give currentTime the current time
            time(&currentTime);

            // get the localtime for the tz selected
            ts_dispatch = localtime(&currentTime);

            // set back our results
            return ts_dispatch;
        }
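
    The thing linking them is localtime() itself: it returns a pointer to a single static struct tm that every call reuses, so dispatchtime and uiuctime end up pointing at the same object. A sketch of a return-by-value fix using POSIX localtime_r, which fills a caller-supplied buffer instead:

        struct tm Tracker::uiuctime_tm() {
            time_t uiucTime;
            struct tm ts_uiuc;

            time(&uiucTime);

            // change the time zone to UIUC
            putenv((char*)"TZ=CST6CDT");
            tzset();

            // localtime_r writes into our own buffer, not the shared static one
            localtime_r(&uiucTime, &ts_uiuc);

            // set back the current timezone
            unsetenv("TZ");
            tzset();

            return ts_uiuc;  // returned by value: no aliasing between callers
        }

    dispatchtime_tm() gets the same treatment, and the callers then hold their own independent struct tm values.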


  • How can I have a Makefile automatically rebuild source files that include a modified header file? (In C/C++)

    - by Nicholas Flynt
    I have the following makefile that I use to build a program (a kernel, actually) that I'm working on. It's from scratch and I'm learning about the process, so it's not perfect, but I think it's powerful enough at this point for my level of experience writing makefiles.

        AS      = nasm
        CC      = gcc
        LD      = ld

        TARGET  = core
        BUILD   = build
        SOURCES = source
        INCLUDE = include
        ASM     = assembly

        VPATH = $(SOURCES)

        CFLAGS  = -Wall -O -fstrength-reduce -fomit-frame-pointer -finline-functions \
                  -nostdinc -fno-builtin -I $(INCLUDE)
        ASFLAGS = -f elf

        #CFILES = core.c consoleio.c system.c
        CFILES  = $(foreach dir,$(SOURCES),$(notdir $(wildcard $(dir)/*.c)))
        SFILES  = assembly/start.asm

        SOBJS = $(SFILES:.asm=.o)
        COBJS = $(CFILES:.c=.o)
        OBJS  = $(SOBJS) $(COBJS)

        build : $(TARGET).img

        $(TARGET).img : $(TARGET).elf
            c:/python26/python.exe concat.py stage1 stage2 pad.bin core.elf floppy.img

        $(TARGET).elf : $(OBJS)
            $(LD) -T link.ld -o $@ $^

        $(SOBJS) : $(SFILES)
            $(AS) $(ASFLAGS) $< -o $@

        %.o: %.c
            @echo Compiling $<...
            $(CC) $(CFLAGS) -c -o $@ $<

        #Clean Script - Should clear out all .o files everywhere and all that.
        clean:
            -del *.img
            -del *.o
            -del assembly\*.o
            -del core.elf

    My main issue with this makefile is that when I modify a header file that one or more C files include, the C files aren't rebuilt. I can fix this quite easily by having all of my header files be dependencies for all of my C files, but that would effectively cause a complete rebuild of the project any time I changed/added a header file, which would not be very graceful. What I want is for only the C files that include the header file I change to be rebuilt, and for the entire project to be linked again. I can do the linking by causing all header files to be dependencies of the target, but I cannot figure out how to make the C files be invalidated when their included header files are newer. I've heard that GCC has some commands to make this possible (so the makefile can somehow figure out which files need to be rebuilt), but I can't for the life of me find an actual implementation example to look at. Can someone post a solution that will enable this behavior in a makefile?

    EDIT: I should clarify, I'm familiar with the concept of putting in the individual targets and having each target.o require the header files. That requires me to be editing the makefile every time I include a header file somewhere, which is a bit of a pain. I'm looking for a solution that can derive the header file dependencies on its own, which I'm fairly certain I've seen in other projects.
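
    The GCC feature in question is dependency-file generation: -MMD writes a .d makefile fragment next to each object, listing exactly the headers that compilation pulled in, and -include feeds those fragments back to make on later runs. A sketch against the pattern rule above (recipe lines tab-indented as usual):

        %.o: %.c
            @echo Compiling $<...
            $(CC) $(CFLAGS) -MMD -MP -c -o $@ $<

        # Pull in the generated .d files; the leading '-' ignores fragments
        # that don't exist yet (e.g. on the first build). -MP above adds
        # phony targets so deleting a header doesn't break the build.
        -include $(COBJS:.o=.d)

    With this in place, touching a header rebuilds only the objects whose .d files name it, and the existing $(TARGET).elf rule then relinks as desired.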


  • CMake: Mac OS X: ld: unknown option: -soname

    - by Alex Ivasyuv
    I try to build my app with CMake on Mac OS X, and I get the following error:

        Linking CXX shared library libsml.so
        ld: unknown option: -soname
        collect2: ld returned 1 exit status
        make[2]: *** [libsml.so] Error 1
        make[1]: *** [CMakeFiles/sml.dir/all] Error 2
        make: *** [all] Error 2

    This is strange, as Mac uses the .dylib extension instead of .so. Here is my CMakeLists.txt:

        cmake_minimum_required(VERSION 2.6)
        PROJECT (SilentMedia)

        SET(SourcePath src/libsml)

        IF (DEFINED OSS)
            SET(OSS_src
                ${SourcePath}/Media/Audio/SoundSystem/OSS/DSP/DSP.cpp
                ${SourcePath}/Media/Audio/SoundSystem/OSS/Mixer/Mixer.cpp
            )
        ENDIF(DEFINED OSS)

        IF (DEFINED ALSA)
            SET(ALSA_src
                ${SourcePath}/Media/Audio/SoundSystem/ALSA/DSP/DSP.cpp
                ${SourcePath}/Media/Audio/SoundSystem/ALSA/Mixer/Mixer.cpp
            )
        ENDIF(DEFINED ALSA)

        SET(SilentMedia_src
            ${SourcePath}/Utils/Base64/Base64.cpp
            ${SourcePath}/Utils/String/String.cpp
            ${SourcePath}/Utils/Random/Random.cpp
            ${SourcePath}/Media/Container/FileLoader.cpp
            ${SourcePath}/Media/Container/OGG/OGG.cpp
            ${SourcePath}/Media/PlayList/XSPF/XSPF.cpp
            ${SourcePath}/Media/PlayList/XSPF/libXSPF.cpp
            ${SourcePath}/Media/PlayList/PlayList.cpp
            ${OSS_src}
            ${ALSA_src}
            ${SourcePath}/Media/Audio/Audio.cpp
            ${SourcePath}/Media/Audio/AudioInfo.cpp
            ${SourcePath}/Media/Audio/AudioProxy.cpp
            ${SourcePath}/Media/Audio/SoundSystem/SoundSystem.cpp
            ${SourcePath}/Media/Audio/SoundSystem/libao/AO.cpp
            ${SourcePath}/Media/Audio/Codec/WAV/WAV.cpp
            ${SourcePath}/Media/Audio/Codec/Vorbis/Vorbis.cpp
            ${SourcePath}/Media/Audio/Codec/WavPack/WavPack.cpp
            ${SourcePath}/Media/Audio/Codec/FLAC/FLAC.cpp
        )

        SET(SilentMedia_LINKED_LIBRARY
            sml
            vorbisfile
            FLAC++
            wavpack
            ao
            #asound
            boost_thread-mt
            boost_filesystem-mt
            xspf
            gtest
        )

        INCLUDE_DIRECTORIES(
            /usr/include
            /usr/local/include
            /usr/include/c++/4.4
            /Users/alex/Downloads/boost_1_45_0
            ${SilentMedia_SOURCE_DIR}/src
            ${SilentMedia_SOURCE_DIR}/${SourcePath}
        )

        #link_directories(
        #    /usr/lib
        #    /usr/local/lib
        #    /Users/alex/Downloads/boost_1_45_0/stage/lib
        #)

        IF(LibraryType STREQUAL "static")
            ADD_LIBRARY(sml-static STATIC ${SilentMedia_src})
            # rename library from libsml-static.a => libsml.a
            SET_TARGET_PROPERTIES(sml-static PROPERTIES OUTPUT_NAME "sml")
            SET_TARGET_PROPERTIES(sml-static PROPERTIES CLEAN_DIRECT_OUTPUT 1)
        ELSEIF(LibraryType STREQUAL "shared")
            ADD_LIBRARY(sml SHARED ${SilentMedia_src})
            # change compile optimization/debug flags
            # -Werror -pedantic
            IF(BuildType STREQUAL "Debug")
                SET_TARGET_PROPERTIES(sml PROPERTIES COMPILE_FLAGS "-pipe -Wall -W -ggdb")
            ELSEIF(BuildType STREQUAL "Release")
                SET_TARGET_PROPERTIES(sml PROPERTIES COMPILE_FLAGS "-pipe -Wall -W -O3 -fomit-frame-pointer")
            ENDIF()
            SET_TARGET_PROPERTIES(sml PROPERTIES CLEAN_DIRECT_OUTPUT 1)
        ENDIF()

        ### TEST ###
        IF(Test STREQUAL "true")
            ADD_EXECUTABLE (bin/TestXSPF ${SourcePath}/Test/Media/PlayLists/XSPF/TestXSPF.cpp)
            TARGET_LINK_LIBRARIES (bin/TestXSPF ${SilentMedia_LINKED_LIBRARY})
            ADD_EXECUTABLE (bin/test1 ${SourcePath}/Test/test.cpp)
            TARGET_LINK_LIBRARIES (bin/test1 ${SilentMedia_LINKED_LIBRARY})
            ADD_EXECUTABLE (bin/TestFileLoader ${SourcePath}/Test/Media/Container/FileLoader/TestFileLoader.cpp)
            TARGET_LINK_LIBRARIES (bin/TestFileLoader ${SilentMedia_LINKED_LIBRARY})
            ADD_EXECUTABLE (bin/testMixer ${SourcePath}/Test/testMixer.cpp)
            TARGET_LINK_LIBRARIES (bin/testMixer ${SilentMedia_LINKED_LIBRARY})
        ENDIF (Test STREQUAL "true")
        ### TEST ###

        ADD_CUSTOM_TARGET(doc COMMAND doxygen ${SilentMedia_SOURCE_DIR}/doc/Doxyfile)

    There was no error on Linux. The build process:

        cmake -D BuildType=Debug -D LibraryType=shared .
        make

    I found that an incorrect command is generated in CMakeFiles/sml.dir/link.txt. But why, given that the goal of CMake is cross-platform support? How do I fix it?


  • libXcodeDebuggerSupport.dylib is missing in iOS 4.2.1 development SDK

    - by Kalle
    Note: creating a symbolic link to use the 4.2 lib seems to work fine; maybe:

        cd /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2.1\ \(8C148\)/Symbols/
        sudo ln -s ../../4.2\ \(8C134\)/Symbols/Developer

    Request: see the end of this question!

    After upgrading from 4.2.0 (beta, I believe) to 4.2.1, the libXcodeDebuggerSupport.dylib file is missing, which results in:

        warning: Unable to read symbols for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2.1 (8C148)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib (file not found).

    which I guess isn't good. Looking at the directory in question I note:

        .../DeviceSupport/4.2 (8C134)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib

    but

        .../DeviceSupport/4.2.1 (8C148)/Symbols/System/
        .../DeviceSupport/4.2.1 (8C148)/Symbols/usr/

    The above two dirs make up all the content in the 4.2.1 folder. No "Developer" folder. Checking the /usr/ dir there, I find no libXcodeDebuggerSupport.dylib file in the lib dir either, so ln -s'ing isn't an option.

    Worth mentioning: after the upgrade, I plugged the iPad in and had to click "Use for development" in Xcode organizer. Doing so, I got a message about symbols missing for that version, and Xcode proceeded to generate such, then failed. I restored the iPad and did "Use for development" again, and nothing about missing symbols appeared...

    Update: deletion of /Developer and reinstallation of Xcode from scratch does not fix this issue.

    Update 2: I just realized that after the reinstall of Xcode, .../DeviceSupport/4.2 (8C134)/Symbols is now a symbolic link:

        lrwxr-xr-x 1 root admin 36 Dec 3 17:17 Symbols -> ../../Developer/SDKs/iPhoneOS4.2.sdk

    And the directory in question has the appropriate files. Maybe this is simply a matter of linking the 4.2.1 dir in the same fashion? I'll try that and see if Xcode freaks out. If someone who has this file could provide an md5 sum, that would be splendid. This is what it says for me:

        $ md5 /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2\ \(8C134\)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib
        MD5 (/Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2 (8C134)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib) = 08f93a0a2e3b03feaae732691f112688

    If that MD5 sum is identical to the output of

        $ md5 /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2.1\ \(8C148\)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib

    then we're all set.


  • Makefile issue with compiling a C++ program

    - by Steve
    I recently got MySQL compiled and working on Cygwin, and got a simple test example from online to verify that it worked. The test example compiled and ran successfully. However, when incorporating MySQL into a hobby project of mine, it isn't compiling, which I believe is due to how the Makefile is set up. I have no experience with Makefiles, and after reading tutorials about them I have a better grasp but still can't get it working correctly. When I try to compile my hobby project I receive errors such as:

        Obj/Database.o:Database.cpp:(.text+0x492): undefined reference to `_mysql_insert_id'
        Obj/Database.o:Database.cpp:(.text+0x4c1): undefined reference to `_mysql_affected_rows'
        collect2: ld returned 1 exit status
        make[1]: *** [build] Error 1
        make: *** [all] Error 2

    Here is my Makefile. It worked for compiling and building the source before I attempted to put MySQL support into the project. The LIBMYSQL paths are correct, verified by 'mysql_config'.

        COMPILER = g++
        WARNING1 = -Wall -Werror -Wformat-security -Winline -Wshadow -Wpointer-arith
        WARNING2 = -Wcast-align -Wcast-qual -Wredundant-decls
        LIBMYSQL = -I/usr/local/include/mysql -L/usr/local/lib/mysql -lmysqlclient
        DEBUGGER = -g3
        OPTIMISE = -O

        C_FLAGS = $(OPTIMISE) $(DEBUGGER) $(WARNING1) $(WARNING2) -export-dynamic $(LIBMYSQL)
        L_FLAGS = -lz -lm -lpthread -lcrypt $(LIBMYSQL)

        OBJ_DIR = Obj/
        SRC_DIR = Source/
        MUD_EXE = project
        MUD_DIR = TestP/
        LOG_DIR = $(MUD_DIR)Files/Logs/

        ECHOCMD = echo -e
        L_GREEN = \e[1;32m
        L_WHITE = \e[1;37m
        L_BLUE  = \e[1;34m
        L_RED   = \e[1;31m
        L_NRM   = \e[0;00m

        DATE = `date +%d-%m-%Y`

        FILES   = $(wildcard $(SRC_DIR)*.cpp)
        C_FILES = $(sort $(FILES))
        O_FILES = $(patsubst $(SRC_DIR)%.cpp, $(OBJ_DIR)%.o, $(C_FILES))

        all:
            @$(ECHOCMD) "  Compiling $(L_RED)$(MUD_EXE)$(L_NRM).";
            @$(MAKE) -s build

        build: $(O_FILES)
            @rm -f $(MUD_EXE)
            $(COMPILER) -o $(MUD_EXE) $(L_FLAGS) $(O_FILES)
            @echo "  Finished Compiling $(MUD_EXE).";
            @chmod g+w $(MUD_EXE)
            @chmod a+x $(MUD_EXE)
            @chmod g+w $(O_FILES)

        $(OBJ_DIR)%.o: $(SRC_DIR)%.cpp
            @echo "  Compiling $@";
            $(COMPILER) -c $(C_FLAGS) $< -o $@

        .cpp.o:
            $(COMPILER) -c $(C_FLAGS) $<

        clean:
            @echo "  Complete compile on $(MUD_EXE).";
            @rm -f $(OBJ_DIR)*.o $(MUD_EXE)
            @$(MAKE) -s build

    I like the functionality of the Makefile: instead of spitting out all the arguments etc., it just prints "Compiling [Filename]" and so on. If I add -c to L_FLAGS then it compiles (I think), but instead spits out stuff like:

        g++: Obj/Database.o: linker input file unused because linking not done

    After a full day of trying and research on Google, I'm no closer to solving my problem, so I come to you guys to see if you can explain to me why all this is happening and, if possible, steps to solve it. Regards, Steve
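
    For what it's worth, the undefined _mysql_* references fit a link-order problem: with static archives, ld resolves symbols left to right, so a library listed before the objects that need it gets scanned too early and skipped. A sketch of the build rule with the objects first (the -l/-L flags belong at link time, which is also why adding -c to L_FLAGS only produces the "linking not done" complaint):

        # Objects first, libraries after, so ld can resolve the MySQL symbols:
        build: $(O_FILES)
            @rm -f $(MUD_EXE)
            $(COMPILER) -o $(MUD_EXE) $(O_FILES) $(L_FLAGS)
            @echo "  Finished Compiling $(MUD_EXE).";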


  • Having trouble doing an Update with a Linq to Sql object

    - by Pure.Krome
    Hi folks, I've got a simple LINQ to SQL object. I grab it from the database and change a field, then save. No rows have been updated. :( When I check the full SQL code that is sent over the wire, I notice that it does an update to the row not via the primary key but on all the fields via the where clause. Is this normal? I would have thought that it would be easy to update the field(s) with the where clause linking on the primary key, instead of where'ing (is that a word :P) on each field. Here's the code...

        using (MyDatabase db = new MyDatabase())
        {
            var boardPost = (from bp in db.BoardPosts
                             where bp.BoardPostId == boardPostId
                             select bp).SingleOrDefault();

            if (boardPost != null && boardPost.BoardPostId > 0)
            {
                boardPost.ListId = listId; // This changes the value from 0 to 'x'
                db.SubmitChanges();
            }
        }

    and here's some sample SQL...

        exec sp_executesql N'UPDATE [dbo].[BoardPost]
        SET [ListId] = @p6
        WHERE ([BoardPostId] = @p0) AND .... <snip the other fields>',
        N'@p0 int,@p1 int,@p2 nvarchar(9),@p3 nvarchar(10),@p4 int,@p5 datetime,@p6 int',
        @p0=1276,@p1=212787,@p2=N'ttreterte',@p3=N'ttreterte3',@p4=1,@p5='2009-09-25 12:32:12.7200000',@p6=72

    Now, I know there's a datetime field in this update... and when I checked the DB its value was/is '2009-09-25 12:32:12.720' (fewer zeros than above), so I'm not sure if that is messing up the where clause condition... but still! Should it do a where clause on the PKs... if anything... for speed! Yes/no?

    UPDATE: After reading nitzmahone's reply, I tried playing around with the optimistic concurrency on some values, and it still didn't work :( So then I started on some new stuff... with the optimistic concurrency happening, it includes a where clause on the field it's trying to update. When that happens, it doesn't work. So... in the above SQL, the where clause looks like this...

        WHERE ([BoardPostId] = @p0) AND ([ListId] IS NULL) AND ... <rest snipped>

    This doesn't sound right! The value in the DB is null before I do the update, but when I add the ListId value to the where clause (or more to the point, when L2S adds it because of the optimistic concurrency), it fails to find/match the row. WTF?
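
    The all-columns WHERE is LINQ to SQL's default optimistic concurrency check: with no version member mapped, it compares every field, and any mismatch (like the datetime precision above) makes the update match zero rows. Two common ways to get key-based updates, sketched with the real System.Data.Linq.Mapping attribute properties against the entity in the question:

        // Option 1: map a rowversion column; L2S then checks only PK + version.
        [Column(IsVersion = true, IsDbGenerated = true, CanBeNull = false)]
        public System.Data.Linq.Binary RowVersion;

        // Option 2: opt individual members out of the concurrency check.
        [Column(UpdateCheck = UpdateCheck.Never)]
        public int? ListId;

    With either mapping in place, the generated UPDATE's WHERE clause collapses to the primary key (plus the version column in option 1).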


  • How to properly preload images, js and css files?

    - by Kenny Bones
    Hi, I'm creating a website from scratch. I was really into this in the late 90's, but the web has changed a lot since then! And I'm more of a designer, so when I started putting this site together I basically did a system of PHP includes to make the site more "dynamic".

    When you first visit the site, you'll be presented with a logon screen if you're not already logged on (cookies). If you're not logged on, a page called access.php is introduced. I thought I'd preload the heaviest images at this point, so that when the user is done logging on, the images are already cached. And this is working as I want. But I still notice that the biggest image isn't rendered immediately anyway, so it seems kind of pointless.

    All of this has made me rethink how the site is structured and how scripts and CSS files are loaded. Using Firebug and YSlow with Firefox, I see a few pointers like expires headers and reducing the size of each script. But is this really the culprit? For example, would this be really really stupid in the main index.php? The entire site is basically structured like this:

        <?php require("dbconnect.php"); ?>
        <?php include ("head.php"); ?>

    And below this is basically just the body and the content of the site. head.php, however, consists of the doctype, the head portions, the linking of two CSS style sheets, the jQuery library, the jQuery validation engine, Cufon and a Cufon font file, and then the small Cufon.Replace snippet. The rest of the body comes with the index.php file, but at the bottom of this is an include of a file called "footer.php", which basically consists of loading a couple of jsLoader scripts and a slide panel, and then a JS function.

    All of this makes the end page source look like a typical complete webpage, but I'm wondering if any of you can see immediately that "this is really really stupid" and "don't do that, do this instead" etc. :) Are includes a bad way to go?

    This site is also pretty image intensive and I can probably do a little more optimization. But I don't think that's the primary culprit. YSlow gives me a report of what takes up the most space:

        doc(1)      -   5.8K
        js(5)       - 198.7K
        css(2)      -   5.6K
        cssimage(8) - 634.7K
        image(6)    - 110.8K

    I know it looks like it's cssimage(8) that weighs the most, but I've already preloaded those images before, and it doesn't really affect the rendering.
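
    On the preloading itself: one minimal pattern is to create Image objects from access.php so the browser fetches and caches the heavy files during logon; keeping references prevents anything from being collected before it lands in cache. The filenames below are placeholders:

        // Kick off the downloads while the user is still on the logon page.
        var preloaded = [];
        function preload(srcs) {
            for (var i = 0; i < srcs.length; i++) {
                var img = new Image();
                img.src = srcs[i];     // request starts immediately; result is cached
                preloaded.push(img);   // hold a reference until the page unloads
            }
        }
        preload(['/images/big-background.jpg', '/images/header.png']);

    Note this only helps if the later pages request byte-identical URLs and the server sends cacheable headers for them, which is where the YSlow "expires headers" advice comes back in.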


  • How do you select form elements in JQuery based upon an html table?

    - by Swoop
    I am working on some ASP.NET web forms which involve some dynamic generation, and I need to add some onClick helpers on the client side. I have a basic outline of something working, except for one huge problem. There are multiple HTML tables, each generated by a different ASP.NET web control. Each table can contain overlapping field names, which is causing a problem with my jQuery click event handlers. The click event handler is linking to unintended form fields in addition to the intended form field.

    I have provided a simplified sample version of the code below. This code is trying to set the value of textbox box1 when a particular radio button is selected in the table with id=thing1. Obviously, the jQuery code will be triggered for the form fields in both tables. The tables are dynamically added to the webpage based upon different conditions. It is possible that no tables will be loaded, only 1 table, or both tables might load. In the future, other tables could be added. Each table comes from a different .NET web control.

    Other than renaming the form fields to make sure they are unique across all user controls, is there a way to have jQuery act only on the intended form fields? In other words, could the table ID be incorporated into the jQuery code in a manner that does not become a nightmare to maintain later?

        <script>
            $(document).ready(function() {
                $("[id$=radio1_0]").click(function() {
                    $("[id$=box1]").attr("value", "");
                });
                $("[id$=radio1_1]").click(function() {
                    $("[id$=box1]").attr("value", "N/A");
                });
            });
        </script>

        <table id="thing1">
            <tr><td>
                <radiobuttonlist id="radio1"/>
                <listitem>yes</listitem>
                <listitem>no</listitem>
            </td></tr>
            <tr><td>
                <textbox id="box1"/>
            </td></tr>
        </table>

        <table id="thing2">
            <tr><td>
                <radiobuttonlist id="radio1"/>
                <listitem>yes</listitem>
                <listitem>no</listitem>
            </td></tr>
            <tr><td>
                <textbox id="box1"/>
            </td></tr>
        </table>
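
    One way to scope this without renaming anything is to anchor each selector to its table, or let the handler find its own table with closest() so the same code serves every control that happens to reuse the ids. A sketch, assuming the rendered ids keep the patterns above:

        $(document).ready(function() {
            // Same attribute-suffix selectors, but only inside #thing1:
            $("#thing1 [id$=radio1_0]").click(function() {
                // closest('table') scopes the update to the table the click
                // came from, so this handler would also behave inside #thing2:
                $(this).closest("table").find("[id$=box1]").val("");
            });
            $("#thing1 [id$=radio1_1]").click(function() {
                $(this).closest("table").find("[id$=box1]").val("N/A");
            });
        });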


  • How to get Augmented Reality: A Practical Guide examples working?

    - by Glen
    I recently bought the book Augmented Reality: A Practical Guide (http://pragprog.com/titles/cfar/augmented-reality). It has example code that it says runs on Windows, MacOS and Linux, but I can't get the binaries to run. Has anyone got this book and got the binaries to run on Ubuntu? I also can't figure out how to compile the examples in Ubuntu. How would I do this? Here is what the book says to do:

    "Compiling for Linux: Refreshingly, there are no changes required to get the programs in this chapter to compile for Linux, but as with Windows, you'll first have to find your GL and GLUT files. This may mean you'll have to download the correct version of GLUT for your machine. You need to link in the GL, GLU, and GLUT libraries and provide a path to the GLUT header file and the files it includes. See whether there is a glut.h file in the /usr/include/GL directory; otherwise, look elsewhere for it. You could use the command find / -name "glut.h" to search your entire machine, or you could use the locate command (locate glut.h). You may need to customize the paths, but here is an example of the compile command:

        gcc -o opengl_template opengl_template.cpp -I /usr/include/GL -I /usr/include -lGL -lGLU -lglut

    gcc is a C/C++ compiler that should be present on your Linux or Unix machine. The -I /usr/include/GL command-line argument tells gcc to look in /usr/include/GL for the include files. In this case, you'll find glut.h and what it includes. When linking in libraries with gcc, you use the -lX switch, where X is the name of your library and there is a corresponding libX.a file somewhere in your path. For this example, you want to link in the library files libGL.a, libGLU.a, and libglut.a, so you will use the gcc arguments -lGL -lGLU -lglut. These three files are found in the default directory /usr/lib/, so you don't need to specify their location as you did with glut.h. If you did need to specify the library path, you would add -L to the path. To run your compiled program, type ./opengl_template or, if the current directory is in your shell's paths, just opengl_template. When working in Linux, it's important to know that you may need to keep your texture files to a maximum of 256 by 256 pixels or find the settings in your system to raise this limit. Often an OpenGL program will work in Windows but produce a blank white texture in Linux until the texture size is reduced."

    The above instructions make no sense to me. Do I have to use gcc to compile, or can I use Eclipse? If I use either Eclipse or gcc, what do I need to do to compile and run the program?
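
    A concrete Ubuntu walk-through of the book's instructions; the package names are Ubuntu's usual ones and worth double-checking, and g++ is used instead of gcc so the C++ runtime is linked automatically:

        # GLUT headers and libraries, plus the compiler toolchain:
        sudo apt-get install build-essential freeglut3-dev

        # The example is C++, so g++ avoids missing-libstdc++ link errors;
        # on Ubuntu glut.h lands under /usr/include/GL, so no extra -I is needed:
        g++ -o opengl_template opengl_template.cpp -lGL -lGLU -lglut

        # Run it:
        ./opengl_template

    Eclipse works too: it drives the same toolchain, so the -lGL -lGLU -lglut flags simply go under the project's linker settings instead of on the command line.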


  • assignment not working in a dll exported C++ class

    - by Jim Jones
    Using VS 2008. I have a C++ class in which I'm calling functions from a 3rd party DLL. The definition in the header file is as follows:

        namespace OITImageExport {
            class ImageExport {
            private:
                SCCERR seResult;        /* Error code returned. */
                VTHDOC hDoc;            /* Input doc handle returned by DAOpenDocument(). */
                VTHEXPORT hExport;      /* Handle to the export returned by EXOpenExport(). */
                VTDWORD dwFIFlags;      /* Used in setting the SCCOPT_FIFLAGS option. */
                VTCHAR szError[256];    /* Error string buffer. */
                VTDWORD dwOutputId;     /* Output Format. */
                VTDWORD dwSpecType;

            public:
                ImageExport(const char* outputId, const char* specType);
                void ProcessDocument(const char* inputPath, const char* outputPath);
                ~ImageExport();
            };
        }

    In the constructor I initialize two of the class fields with values that come from enumerations in the 3rd party DLL:

        ImageExport::ImageExport(const char* outputId, const char* specType) {
            if(outputId == "jpeg") {
                dwOutputId = FI_JPEGFIF;
            }

            if(specType == "ansi") {
                dwSpecType = IOTYPE_ANSIPATH;
            }

            seResult = DAInit();
            if (seResult != SCCERR_OK) {
                DAGetErrorString(seResult, szError, sizeof(szError));
                fprintf(stderr, "DAInit() failed: %s (0x%04X)\n", szError, seResult);
                exit(seResult);
            }
        }

    When I use this class inside a console app, with a main method in another file (all in the same namespace), instantiating the class object and calling the methods, it works like a champ. So, now that I know the basic code works, I open a DLL project using the class header and code file. Of course, I have to add the DLL macro, namely:

        #ifdef IMAGEDLL_EXPORTS
        #define DLL __declspec(dllexport)
        #else
        #define DLL __declspec(dllimport)
        #endif

    and I changed the class definition to "class DLL ImageExport". It compiled nicely to a DLL and .lib file (no errors, no warnings). Now, to test this DLL I open another console project using the same main method as before and linking to the (DLL) lib file. I had problems which, when tracked down, were the result of the two fields not being set; both had values of 0. I went back to the first console app and printed out the values: dwOutputId was 1535 (#define FI_JPEGFIF 1535) and dwSpecType was 2 (#define IOTYPE_ANSIPATH 2). Now, if I were assigning these values outside of the class, I could see how the visibility might be different, but why is the assignment in the DLL not working? Is it something about having a class in the DLL?
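
    One likely culprit worth ruling out: outputId == "jpeg" compares pointers, not contents. Within a single binary, identical string literals are often pooled to one address, so the test can pass by luck; once the call crosses a DLL boundary, the literals live in different modules, both comparisons fail, and the two fields are left uninitialized (reading as 0). A sketch of the constructor with real string comparison:

        #include <cstring>

        ImageExport::ImageExport(const char* outputId, const char* specType) {
            // Compare characters, not pointer values:
            if (std::strcmp(outputId, "jpeg") == 0) {
                dwOutputId = FI_JPEGFIF;
            }
            if (std::strcmp(specType, "ansi") == 0) {
                dwSpecType = IOTYPE_ANSIPATH;
            }
            // ... DAInit() and the error handling from the original constructor ...
        }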

