Search Results


  • Passing a parameter in a Report's Open Event to a parameter query (Access 2007)

    - by JPM
    Hi there, I would like to know if there is a way to set the parameters in an Access 2007 query using VBA. I am new to using VBA in Access, and I have been tasked with adding a little piece of functionality to an existing app.

    The issue I am having is that the same report can be called in two different places in the application: the first from a command button on a data entry form, the other from a switchboard button. The report itself is based on a parameter query that requires the user to enter a Supplier ID. The user would like not to have to enter the Supplier ID on the data entry form (since the form displays the Supplier ID already), but from the switchboard, they would like to be prompted to enter a Supplier ID.

    Where I am stuck is how to call the report's query (in the report's open event) and pass the SupplierID from the form as the parameter. I have been trying for a while, and I can't get anything to work correctly. Here is my code so far, but I am obviously stumped:

        Private Sub Report_Open(Cancel As Integer)
            Dim intSupplierCode As Integer
            'Check to see if the data entry form is open
            If CurrentProject.AllForms("frmExample").IsLoaded = True Then
                'Retrieve the SupplierID from the data entry form
                intSupplierCode = Forms![frmExample]![SupplierID]
                'Call the parameter query passing the SupplierID????
                DoCmd.OpenQuery "qryParams"
            Else
                'Execute the parameter query as normal
                DoCmd.OpenQuery "qryParams" '?????
            End If
        End Sub

    I've tried Me.SupplierID = intSupplierCode, and although it compiles, it bombs when I run it. And here is my SQL code for the parameter query:

        PARAMETERS [Enter Supplier] Long;
        SELECT Suppliers.SupplierID, Suppliers.CompanyName,
               Suppliers.ContactName, Suppliers.ContactTitle
        FROM Suppliers
        WHERE (((Suppliers.SupplierID)=[Enter Supplier]));

    I know there are ways around this problem (and probably an easy way as well) but like I said, my lack of experience with Access and VBA makes things difficult. If any of you could help, that would be great!
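
    One possible approach, sketched below: sidestep Report_Open entirely, base the report on a version of the query without the parameter, and filter with OpenReport's WhereCondition argument from each caller. The report and button names here (rptSuppliers, cmdPrintReport, btnSupplierReport) are illustrative, not from the original app:

        'On the data entry form: no prompt, the form supplies the filter
        Private Sub cmdPrintReport_Click()
            DoCmd.OpenReport "rptSuppliers", acViewPreview, , _
                "SupplierID=" & Me![SupplierID]
        End Sub

        'On the switchboard: collect the ID first, then apply the same filter
        Private Sub btnSupplierReport_Click()
            Dim strID As String
            strID = InputBox("Enter Supplier ID")
            If Len(strID) > 0 Then
                DoCmd.OpenReport "rptSuppliers", acViewPreview, , _
                    "SupplierID=" & strID
            End If
        End Sub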

  • Web service performing asynchronous call

    - by kornelijepetak
    I have a webservice method FetchNumber() that fetches a number from a database and then returns it to the caller. But just before it returns the number, it needs to send it to another service, so it instantiates and runs a BackgroundWorker whose job is to do that send:

        public class FetchingNumberService : System.Web.Services.WebService
        {
            [WebMethod]
            public int FetchNumber()
            {
                int value = Database.GetNumber();
                AsyncUpdateNumber async = new AsyncUpdateNumber(value);
                return value;
            }
        }

        public class AsyncUpdateNumber
        {
            public AsyncUpdateNumber(int number)
            {
                sendingNumber = number;
                worker = new BackgroundWorker();
                worker.DoWork += asynchronousCall;
                worker.RunWorkerAsync();
            }

            private void asynchronousCall(object sender, DoWorkEventArgs e)
            {
                // Sending a number to a service (which is synchronous) here
            }

            private int sendingNumber;
            private BackgroundWorker worker;
        }

    I don't want to block the web service (FetchNumber()) while sending this number to another service, because it can take a long time and the caller does not care about it. The caller expects FetchNumber() to return as soon as possible, so it creates the background worker, starts it, and finishes while the worker runs on a background thread. I don't need any progress report or return value from the background worker; it's more of a fire-and-forget concept.

    My question is this. Since the web service object is instantiated per method call, what happens when the called method (FetchNumber() in this case) is finished while the background worker it instantiated is still running? What happens to the background thread? When does GC collect the service object? Does this prevent the background thread from executing correctly to the end? Are there any other side effects on the background thread? Thanks for any input.
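
    For what it's worth, a minimal fire-and-forget sketch using the thread pool instead of the UI-oriented BackgroundWorker (SendToOtherService is a hypothetical helper standing in for the synchronous call; this assumes losing the send on an app-domain recycle is acceptable):

        [WebMethod]
        public int FetchNumber()
        {
            int value = Database.GetNumber();
            System.Threading.ThreadPool.QueueUserWorkItem(delegate
            {
                // never let an exception escape a pool thread
                try { SendToOtherService(value); }
                catch (Exception) { /* log it */ }
            });
            return value;
        }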

  • real time stock quotes, StreamReader performance optimization

    - by sean717
    I am working on a program that extracts real time quotes for 900+ stocks from a website. I use HttpWebRequest to send an HTTP request to the site, store the response in a stream, and open the stream using the following code:

        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        Stream stream = response.GetResponseStream();
        StreamReader reader = new StreamReader(stream);

    The size of the received HTML is large (5000+ lines), so it takes a long time to parse it and extract the price. For 900 files, parsing and extracting takes about 6 minutes, which my boss isn't happy with; he told me he wants the whole process done in TWO minutes.

    I've identified that the part of the program that takes most of the time is parsing and extracting. I've tried to optimize the code to make it faster; the following is what I have now after some optimization:

        // skip lines at the top
        for (int i = 0; i < 1500; ++i)
            reader.ReadLine();
        // read the line that contains the price
        string theLine = reader.ReadLine();
        // ... extract the price from the line now

    It now takes about 4 minutes to process all the files, so there is still a significant gap to what my boss is expecting. So I am wondering: is there another way that I can further speed up the parsing and extracting and have everything done within 2 minutes?
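
    Since most of the wall-clock time likely goes to 900 sequential HTTP round trips rather than parsing, fetching in parallel may matter more than a faster parser. A sketch using .NET 4's Parallel.ForEach (stockUrls and a thread-safe results store are assumptions; the default of 2 concurrent connections per host must be raised):

        using System;
        using System.IO;
        using System.Net;
        using System.Threading.Tasks;

        static class QuoteFetcher
        {
            static void FetchAll(string[] stockUrls)
            {
                ServicePointManager.DefaultConnectionLimit = 20; // default of 2 throttles parallel HTTP
                Parallel.ForEach(stockUrls, url =>
                {
                    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
                    {
                        for (int i = 0; i < 1500; ++i)   // skip the preamble as before
                            reader.ReadLine();
                        string theLine = reader.ReadLine();
                        // ... extract the price and store it somewhere thread-safe
                    }
                });
            }
        }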

  • PHP Socket Server vs node.js: Web Chat

    - by Eliasdx
    I want to program an HTTP web chat using long-held HTTP requests (Comet), ajax and websockets (depending on the browser used). The user database is in MySQL. The chat is written in PHP, except maybe the chat stream itself, which could also be written in javascript (node.js): I don't want to start a PHP process per user, as there is no good way to send the chat messages between those PHP children.

    So I thought about writing my own socket server, in either PHP or node.js, which should be able to handle more than 1000 connections (chat users). As a purely web developer (PHP), I'm not very familiar with sockets, as I usually let the web server take care of connections. The chat messages won't be saved on disk or in MySQL, but in RAM, as an array or object, for best speed.

    As far as I know, there is no way to handle multiple connections at the same time in a single PHP process (socket server); however, you can accept a great number of socket connections and process them successively in a loop (read and write; for each incoming message, write to all socket connections). The problem is that there will most likely be lag with ~1000 users, and MySQL operations could slow the whole thing down, which would then affect all users.

    My question is: can node.js handle a socket server with better performance? Node.js is event-based, but I'm not sure whether it can process multiple events at the same time (wouldn't that need multi-threading?) or whether there is just an event queue. With an event queue it would be just like PHP: process user after user.

    I could also spawn a PHP process per chat room (far fewer users), but as far as I know there are single-threaded IRC servers (written in C++ or whatever) that are capable of handling thousands of users, so maybe it's also possible in PHP. I would prefer PHP over node.js, because then the project would be PHP-only and not a mixture of programming languages. However, if Node can process connections simultaneously, I'd probably choose it.
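
    To make the single-threaded model concrete, here is a minimal broadcast sketch in node.js (callback-style net API; the port and framing are illustrative). There is indeed just one event queue, but because every read and write is non-blocking, thousands of mostly idle chat sockets cost very little:

        var net = require('net');
        var clients = [];

        net.createServer(function (socket) {
            clients.push(socket);
            socket.on('data', function (data) {
                // relay each incoming message to every connected client
                clients.forEach(function (c) { c.write(data); });
            });
            socket.on('end', function () {
                clients.splice(clients.indexOf(socket), 1);
            });
        }).listen(8000);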

  • How to benchmark on multi-core processors

    - by Pascal Cuoq
    I am looking for ways to perform micro-benchmarks on multi-core processors.

    Context: at about the same time desktop processors introduced out-of-order execution, which made performance hard to predict, they also (perhaps not coincidentally) introduced special instructions for very precise timings. Examples of these instructions are rdtsc on x86 and mftb on PowerPC. These instructions gave timings more precise than a system call could ever allow, and let programmers micro-benchmark their hearts out, for better or for worse.

    On a yet more modern processor with several cores, some of which sleep some of the time, the counters are not synchronized between cores. We are told that rdtsc is no longer safe to use for benchmarking, but I must have been dozing off when the alternative solutions were explained.

    Question: some systems may save and restore the performance counter and provide an API call to read the proper sum. If you know what this call is for any operating system, please let us know in an answer. Some systems may allow turning cores off, leaving only one running. I know Mac OS X Leopard does, when the right Preference Pane is installed from the Developer Tools. Do you think this makes rdtsc safe to use again?

    More context: please assume I know what I am doing when trying to do a micro-benchmark. If you are of the opinion that an optimization whose gains cannot be measured by timing the whole application is not worth making, I agree with you, but I cannot time the whole application until the alternative data structure is finished, which will take a long time. In fact, if the micro-benchmark were not promising, I could decide to give up on the implementation now; I need figures to provide in a publication whose deadline I have no control over.
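
    One alternative worth noting as a sketch (POSIX, so not Mac OS X of that era; CLOCK_MONOTONIC is kept consistent across cores by the kernel, at the price of much coarser overhead than raw rdtsc; link with -lrt on older glibc):

        #include <stdio.h>
        #include <time.h>

        static double now_seconds(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec + ts.tv_nsec / 1e9;
        }

        int main(void)
        {
            double t0 = now_seconds();
            /* ... code under test, repeated enough to dominate timer overhead ... */
            double t1 = now_seconds();
            printf("elapsed: %.9f s\n", t1 - t0);
            return 0;
        }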

  • program won't find math.h anymore

    - by 130490868091234
    After a long time, I downloaded a program I co-developed and tried to recompile it on my Ubuntu Linux 12.04, but it seems it does not find math.h anymore. This may be because something has changed recently in gcc, but I can't figure out if it's something wrong in src/Makefile.am or a missing dependency. Download from http://www.ub.edu/softevol/variscan/:

        tar xzf variscan-2.0.2.tar.gz
        cd variscan-2.0.2/
        make distclean
        sh ./autogen.sh
        make

    I get:

        [...]
        gcc -DNDEBUG -O3 -W -Wall -ansi -pedantic -lm -o variscan variscan.o statistics.o common.o linefile.o memalloc.o dlist.o errabort.o dystring.o intExp.o kxTok.o pop.o window.o free.o output.o readphylip.o readaxt.o readmga.o readmaf.o readhapmap.o readxmfa.o readmav.o ran1.o swcolumn.o swnet.o swpoly.o swref.o
        statistics.o: In function `calculate_Fu_and_Li_D':
        statistics.c:(.text+0x497): undefined reference to `sqrt'
        statistics.o: In function `calculate_Fu_and_Li_F':
        statistics.c:(.text+0x569): undefined reference to `sqrt'
        statistics.o: In function `calculate_Fu_and_Li_D_star':
        statistics.c:(.text+0x63b): undefined reference to `sqrt'
        statistics.o: In function `calculate_Fu_and_Li_F_star':
        statistics.c:(.text+0x75c): undefined reference to `sqrt'
        statistics.o: In function `calculate_Tajima_D':
        statistics.c:(.text+0x85d): undefined reference to `sqrt'
        statistics.o:statistics.c:(.text+0xcb1): more undefined references to `sqrt' follow
        statistics.o: In function `calcRunMode21Stats':
        statistics.c:(.text+0xe02): undefined reference to `log'
        statistics.o: In function `correctedDivergence':
        statistics.c:(.text+0xe5a): undefined reference to `log'
        statistics.o: In function `calcRunMode22Stats':
        statistics.c:(.text+0x104a): undefined reference to `sqrt'
        statistics.o: In function `calculate_Fu_fs':
        statistics.c:(.text+0x11a8): undefined reference to `fabsl'
        statistics.c:(.text+0x11ca): undefined reference to `powl'
        statistics.c:(.text+0x11f2): undefined reference to `logl'
        statistics.o: In function `calculateStatistics':
        statistics.c:(.text+0x13f2): undefined reference to `log'
        collect2: ld returned 1 exit status
        make[1]: *** [variscan] Error 1
        make[1]: Leaving directory `/home/avilella/variscan/latest/variscan-2.0.2/src'
        make: *** [all-recursive] Error 1

    The libraries are there, because this simple example works perfectly well:

        $ gcc test.c -o test -lm
        $ cat test.c
        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            double x = 0.5;
            double result = sqrt(x);
            printf("The square root of %lf is %lf\n", x, result);
            return 0;
        }

    Any ideas?
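
    Note these are undefined references at link time, not a missing math.h at compile time, which suggests the position of -lm is the problem: the failing command puts -lm before the object files, and newer Ubuntu toolchains resolve libraries strictly left to right, so nothing requested by statistics.o is pulled from libm. The one-file test works because there -lm comes last. A sketch of the fix (the exact Makefile.am variable is an assumption about this project's layout):

        # manually, the link succeeds with -lm moved after the objects:
        gcc -DNDEBUG -O3 -W -Wall -ansi -pedantic -o variscan variscan.o statistics.o ... -lm

        # in src/Makefile.am, keep -lm out of the compile flags and add it to the link step:
        variscan_LDADD = -lm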

  • loading google maps drawing manager object

    - by psychok7
    So I am using the Google Maps Drawing Manager to draw some polygons, and I am saving the lat and long coordinates to my database. Now my question is: after I load them back into my array, how can I rebuild the saved polygon on my map? I can't seem to find code explaining that. This is what I have now:

        window.initialize_2 = function () {
            var mapOptions = {
                center: new google.maps.LatLng(-34.397, 150.644),
                zoom: 8,
                mapTypeId: google.maps.MapTypeId.ROADMAP
            };
            var map = maplimits;

            var drawingManager = new google.maps.drawing.DrawingManager({
                drawingMode: google.maps.drawing.OverlayType.MARKER,
                drawingControl: true,
                drawingControlOptions: {
                    position: google.maps.ControlPosition.TOP_CENTER,
                    drawingModes: [google.maps.drawing.OverlayType.POLYGON]
                },
                markerOptions: { icon: 'images/beachflag.png' },
                polygonOptions: {
                    fillColor: '#ffff00',
                    fillOpacity: 10,
                    strokeWeight: 5,
                    clickable: true,
                    editable: true,
                    zIndex: 1
                }
            });

            var coord_listener = google.maps.event.addListener(drawingManager, 'polygoncomplete', function (polygon) {
                var coordinates = (polygon.getPath().getArray());
                console.log(coordinates);
                window.poly = polygon;
            });

            // delete shape
            google.maps.event.addListener(drawingManager, 'overlaycomplete', function (e) {
                if (e.type != google.maps.drawing.OverlayType.MARKER) {
                    // Switch back to non-drawing mode after drawing a shape.
                    drawingManager.setDrawingMode(null);
                    // Add an event listener that selects the newly-drawn shape when the user
                    // mouses down on it.
                    var newShape = e.overlay;
                    newShape.type = e.type;
                    google.maps.event.addListener(newShape, 'click', function () {
                        setSelection(newShape);
                    });
                    setSelection(newShape);
                }
            });

            // Clear the current selection when the drawing mode is changed, or when the
            // map is clicked.
            google.maps.event.addListener(drawingManager, 'drawingmode_changed', clearSelection);
            google.maps.event.addListener(map, 'click', clearSelection);
            drawingManager.setMap(map);
        }
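
    A sketch of the rebuilding step, assuming the rows come back as an array of {lat, lng} pairs named saved (the styling mirrors the polygonOptions above):

        var path = [];
        for (var i = 0; i < saved.length; i++) {
            path.push(new google.maps.LatLng(saved[i].lat, saved[i].lng));
        }
        var restored = new google.maps.Polygon({
            paths: path,
            fillColor: '#ffff00',
            strokeWeight: 5,
            editable: true
        });
        restored.setMap(map); // the polygon reappears without the DrawingManager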

  • Validating Time & Date To Be At Least A Certain Amount Of Time In The Future

    - by MJH
    I've built a reservation form for a taxi company, which works fine, but I'm having an issue with users making reservations that are due too soon. Since the entire form is kind of long, I first want to make sure the user is not trying to make a reservation less than an hour ahead of time, without them having to fill out the whole form. This is what I have come up with so far, but it's just not working:

        <?php
        //Set local time zone.
        date_default_timezone_set('America/New_York');

        //Get current date and time.
        $current_time = date('Y-m-d H:i:s');

        //Set reservation time variable
        $res_datetime = $_POST['res_datetime'];

        //Set event time.
        $event_time = strtotime($res_datetime);
        ?>
        <!doctype html>
        <html>
        <head>
        <meta charset="utf-8">
        <title>Check Date and Time</title>
        </head>
        <?php
        //Check to be sure reservation time is at least one hour in the future.
        if (($current_time - $event_time) <= (3600)) {
            echo "You must make a reservation at least one hour ahead of time.";
        }
        ?>
        <form name="datetime" action="" method="post">
            <input name="res_datetime" type="datetime-local" id="res_datetime">
            <input type="submit">
        </form>
        <body>
        </body>
        </html>

    How can I create a validation check to make sure the date and time of the reservation is at least one hour ahead of the current time?
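
    Two things stand out in the snippet: $current_time is a formatted string rather than a timestamp, and the subtraction runs in the wrong direction for a future event. A sketch of the check with both sides expressed in epoch seconds (keeping the question's time zone, and assuming res_datetime arrives in a format strtotime understands, which the datetime-local input's value does):

        <?php
        date_default_timezone_set('America/New_York');

        if (isset($_POST['res_datetime'])) {
            $event_time = strtotime($_POST['res_datetime']);
            if ($event_time === false) {
                echo "Please enter a valid date and time.";
            } elseif ($event_time - time() < 3600) {
                echo "You must make a reservation at least one hour ahead of time.";
            } else {
                // at least an hour in the future: show the rest of the form
            }
        }
        ?>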

  • Time complexity to fill hash table (homework)?

    - by Heathcliff
    This is a homework question, but I think there's something missing from it. It asks:

        Provide a sequence of m keys to fill a hash table implemented with linear probing, such that the time to fill it is minimum.

    And then:

        Provide another sequence of m keys, but such that the time to fill it is maximum. Repeat these two questions if the hash table implements quadratic probing.

    I can only assume that the hash table has size m, both because it's the only number given and because we have been using that letter for the hash table size when describing the load factor. But I can't think of any sequence that achieves the first without knowing the hash function that hashes the sequence into the table. If it is a bad hash function that, for instance, hashes every entry to the same index, then both the minimum and maximum time to fill the table will be O(n), regardless of what the sequence looks like. And in the average case, where I assume the hash function is OK, how am I supposed to know how long it will take that hash function to fill the table? Aren't these questions tied more strongly to the hash function than to the sequence being hashed?

    As for the second question, I can assume that, regardless of the hash function, a sequence of m copies of the same key will give the maximum time, because it will cause linear probing from the second entry on. I think that will take O(n) time. Is that correct? Thanks
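
    Assuming the common textbook convention h(k) = k mod m (an assumption the question seems to require), the two extreme sequences can be written down directly; a small Java sketch (note the worst case uses distinct keys with the same residue, since repeating one key would be updates rather than inserts):

        public class ProbeSequences {
            public static void main(String[] args) {
                int m = 11;                // table size (any m works)
                int[] best = new int[m];   // each key lands in a distinct empty slot: no probing
                int[] worst = new int[m];  // every key hashes to slot 0: i probes on insert i
                for (int i = 0; i < m; i++) {
                    best[i] = i;           // h(i) = i mod m = i
                    worst[i] = i * m;      // h(i*m) = 0 for every key
                }
            }
        }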

  • Efficiently Serving Dynamic Content in Google App Engine

    - by awegawef
    My app on Google App Engine returns content items (just text) and comments on them. It works like this (pseudo-ish code):

        query: get keys of latest content   # query to datastore
        for each item in content:
            if item_dict in memcache:
                use item_dict
            else:
                build_item_dict(item)       # by fetching from datastore
                store item_dict in memcache
        send all item_dicts to template

    Sorry if the code isn't understandable. I get all of the content dictionaries and send them to the template, which uses them to create the webpage.

    My problem is that if the memcache entries have expired, then for each item I want to display I have to (1) look the item up in memcache, (2) fetch the item from the datastore since no memcache entry exists, and (3) store the item in memcache. These calls add up quickly. I don't set an expiry time for the memcache entries, so this really only happens once in the morning, but the webpage then takes long enough to load (~1 sec) that the browser reports it as not existing. Normally, my webpages take about 50ms to load.

    This approach works decently for frequent visits, but it has its flaws, as shown above. How can I remedy this? The entries are dynamic enough that I don't think it would be in my best interest to cache my initial request. Thanks in advance
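
    A sketch of a batched variant, so a cold cache costs one memcache round trip, one batched datastore get, and one memcache write, instead of three calls per item (get_item_dicts is a hypothetical wrapper around the existing build_item_dict):

        from google.appengine.api import memcache
        from google.appengine.ext import db

        def get_item_dicts(keys):
            ids = [str(k) for k in keys]
            cached = memcache.get_multi(ids)
            missing = [k for k, i in zip(keys, ids) if i not in cached]
            if missing:
                fresh = {}
                for entity in db.get(missing):   # one batched datastore call
                    fresh[str(entity.key())] = build_item_dict(entity)
                memcache.set_multi(fresh)
                cached.update(fresh)
            return [cached[i] for i in ids]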

  • How to increase PermGen memory for eclipselink StaticWeaveAntTask

    - by rayd09
    We are using EclipseLink and need to weave the code in order for lazy fetching to work properly. During the weave process I'm getting the following error:

        weave:
        BUILD FAILED
        java.lang.OutOfMemoryError: PermGen space

    I have the following tasks within my ant build file:

        <target name="define_weave_task" description="task definition for EclipseLink static weaving">
            <taskdef name="eclipse_weave"
                     classname="org.eclipse.persistence.tools.weaving.jpa.StaticWeaveAntTask"/>
        </target>

        <target name="weave" depends="compile,define_weave_task"
                description="weave eclipselink code into compiled classes">
            <eclipse_weave source="${path.classes}" target="${path.classes}">
                <classpath refid="compile.classpath"/>
            </eclipse_weave>
        </target>

    It worked great for a long time. Now that the amount of code to be woven has increased, I'm getting the PermGen error, and I would like to increase the amount of perm space. If I were compiling, I could raise it via a compiler argument such as:

        <compilerarg value="-XX:MaxPermSize=256M"/>

    but this does not appear to be a valid argument for EclipseLink weaving. How can I raise the perm space for the weave?
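
    Since StaticWeaveAntTask runs inside Ant's own JVM (it does not appear to offer a fork attribute, an assumption worth double-checking against your EclipseLink version), one workaround is to give Ant itself more PermGen via ANT_OPTS before launching the build:

        # Unix shell:
        export ANT_OPTS="-XX:MaxPermSize=256m"
        ant weave

        # Windows:
        # set ANT_OPTS=-XX:MaxPermSize=256m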

  • Beginner Question: For extracting a large subset of a table from MySQL, how does indexing, order of tab

    - by chongman
    Sorry if this is too simple, but thanks in advance for helping. This is for MySQL but might be relevant for other RDBMSs.

    tblA has columns colA, colB, colC, mydata, and A_id. It has about 10^9 records, with 10^3 distinct values for colA, colB, and colC. tblB has 3 columns: colA, colB, B_id. It has about 10^4 records.

    I want all the records from tblA (except the A_id) that have a match in tblB. In other words, I want to use tblB to describe the subset that I want to extract, and then extract those records from tblA. Namely:

        SELECT a.colA, a.colB, a.colC, a.mydata
        FROM tblA AS a
        INNER JOIN tblB AS b
            ON a.colA = b.colA
            AND a.colB = b.colB;

    It's taking a really long time (more than an hour) on a newish computer (4GB, Core2Quad, Ubuntu), and I just want to check my understanding of the following optimization steps. Suppose this is the only query I will ever run on these tables, so ignore the need to run other queries. Now my questions:

    1) What indexes should I create to optimize this query? I think I just need a multiple-column index on (colA, colB) for both tables; I don't think I need separate indexes for colA and colB. Another Stack Overflow article (that I can't find) mentioned that adding new indexes is slower when there are existing indexes, so that might be a reason to use the multiple-column index.

    2) Is INNER JOIN correct? I just want results where a match is found.

    3) Is it faster if I join tblA to tblB, or the other way around (tblB to tblA)? This previous answer says that the optimizer should take care of that.

    4) Does the order of the part after ON matter? This previous answer says that the optimizer also takes care of the execution order.
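
    For question 1, a sketch of the composite indexes plus a way to check that they are used (with only ~10^3 distinct values per column, it is the (colA, colB) prefix that makes the join selective):

        CREATE INDEX idx_tblB_colA_colB ON tblB (colA, colB);
        CREATE INDEX idx_tblA_colA_colB ON tblA (colA, colB);

        EXPLAIN
        SELECT a.colA, a.colB, a.colC, a.mydata
        FROM tblA AS a
        INNER JOIN tblB AS b
            ON a.colA = b.colA
            AND a.colB = b.colB;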

  • Regular expression to match HTML table row ( <tr> ) NOT containing a specific value

    - by user1821136
    I'm using Notepad++ to clean up a long and messy HTML table. I'm trying to use regular expressions, even though I'm a total noob. :)

    I need to remove all the table rows that don't contain a specific value (may I call that a substring?). After unwrapping all the file contents, I've been able to use the following regular expression to select, one by one, every table row with all its contents:

        <tr>.+?</tr>

    How can I improve the regular expression so that it selects and replaces only the table rows that do not contain, somewhere inside them, that defined substring? I don't know if this matters, but the structure of every table row is the following (I've put in every HTML tag; the dots stand for standard content/values):

        <tr>
            <td> ... </td>
            <td> ... </td>
            <td>
                <a sfref="..." href="...">!! SUBSTRING I HAVE TO MATCH HERE !!</a>
            </td>
            <td> <img /> </td>
            <td> ... </td>
            <td> ... </td>
            <td> ... </td>
            <td> ... </td>
        </tr>

    Thanks in advance for your help!
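
    A sketch using a "tempered dot" (Notepad++ 6 and later use Boost regexes, where this works; SUBSTRING stands for your literal value and must be escaped if it contains regex metacharacters):

        (?s)<tr>(?:(?!SUBSTRING)(?!</tr>).)*</tr>

    Each . may only be consumed at a position where neither SUBSTRING nor </tr> starts, so the expression matches exactly the rows in which the value never appears before the closing tag; replacing all matches with an empty string removes them.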

  • Connections hanging on read()

    - by viraptor
    Hi, short version: I've got a strange issue with a server accepting TCP connections. Even though there are normally some processes waiting, at some volume of connections it hangs.

    Long version: the server is written in Perl and binds a $srv socket with the reuse flag and listen == 5. Afterwards, it forks into 10 processes, each with a loop of:

        $clt = $srv->accept();
        do_processing($clt);
        $clt->shutdown(2);

    The client, written in C, is also very simple: it sends some lines, then receives all lines available and does a shutdown(sockfd, 2). There's nothing async going on, and at the end both send and receive queues are empty (as reported by netstat). Connections last only ~20ms. All clients behave the same way, are the same implementation, etc.

    Now let's say I'm accepting X connections from client 1 and another X from client 2. The processes still report that they're idle all the time. If I add another X connections from client 3, suddenly the server processes start hanging just after accept(). The first blocking thing they do after accept() is while (<$clt>) ..., but they don't get any data (on the first try already). Suddenly all 10 processes are in this state and do not stop waiting.

    Under strace, the server processes seem to hang on read(), which makes sense. There are loads of connections in TIME_WAIT state belonging to that server (~100 when the problem starts to manifest), but this might be a red herring. What could be happening here?
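
    One assumption-laden avenue to rule out: with listen == 5, a burst of arrivals can overflow the accept backlog, and what eventually gets accepted may be a connection whose client has long since given up, so the first read blocks. A sketch of the listener with a larger backlog:

        use Socket qw(SOMAXCONN);
        use IO::Socket::INET;

        my $srv = IO::Socket::INET->new(
            LocalPort => 9000,        # illustrative port
            Proto     => 'tcp',
            ReuseAddr => 1,
            Listen    => SOMAXCONN,   # instead of 5
        ) or die "listen: $!";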

  • Search and replace in Word RTF

    - by Perello
    I'm working on an application which has a workflow for postal mail. These mailings are generated according to my application's business rules. The templates are in HTML or RTF, and it works perfectly as long as the user does not create the RTF with Word. That is not within the specs, but my managers would welcome Word compatibility if it doesn't involve too much work, and it would please our customer and ease their life.

    The RTF templates contain tags which are replaced by application values. In most RTF files the tags are not split, so the search and replace works perfectly. I wish to handle Word with few modifications. Example data: [[FooBuzz]], which in most RTF is not split. In Word 2003 it becomes:

        {\rtlch\fcs1 \af0 \ltrch\fcs0 \insrsid5517131 [[}{\rtlch\fcs1 \af0 \ltrch\fcs0 \insrsid2708730 FooBuzz}{\rtlch\fcs1 \af0 \ltrch\fcs0 \insrsid5517131 ]]}

    Newer Word versions (Word 2007) also split the tag, into something like Foo{garbage inside}Buzz. So I wish to handle common RTF perfectly and detect tags even if they are split. I have two constraints: first, no regression; second, it has to stay simple. Performance is not an issue here. I'm using symfony 1.4. The current relevant search code:

        $regExpression = '/\[\[([^\[\]]*)\]\]/';
        preg_match_all($regExpression, $sTemplate, $outKeys);
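
    A sketch of a noise-tolerant pattern builder: it allows RTF control words and group braces between every character of the tag. This is a heuristic, not a full RTF parser; it assumes the tag text itself never legitimately contains braces, and note that replacing a match also swallows the interleaved control words, which can unbalance RTF groups in complex documents:

        function rtfTolerantPattern($tag)
        {
            // control words like \insrsid5517131, braces, and line breaks may
            // appear between any two characters of the tag
            $noise = '(?:\\\\[a-z]+-?[0-9]* ?|[{}]|\r|\n)*';
            $parts = array();
            foreach (str_split($tag) as $ch) {
                $parts[] = preg_quote($ch, '/');
            }
            return '/' . implode($noise, $parts) . '/';
        }

        // usage: matches [[FooBuzz]] even when Word splits it across runs
        $pattern = rtfTolerantPattern('[[FooBuzz]]');
        $sTemplate = preg_replace($pattern, 'application value', $sTemplate);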

  • TicTacToe strategic reduction

    - by NickLarsen
    I decided to write a small program that solves TicTacToe in order to try out the effect of some pruning techniques on a trivial game. The full game tree using minimax to solve it ends up with only 549,946 possible games. With alpha-beta pruning, the number of states required to evaluate was reduced to 18,297. Then I applied a transposition table that brings the number down to 2,592. Now I want to see how low that number can go.

    The next enhancement I want to apply is a strategic reduction. The basic idea is to combine states that have equivalent strategic value. For instance, on the first move, if X plays first, there is nothing strategically different (assuming your opponent plays optimally) about choosing one corner instead of another. In the same situation, the same is true of the middle of each edge of the board, and the center is also significant. By reducing to significant states only, you end up with only 3 states for evaluation on the first move instead of 9. This technique should be very useful since it prunes states near the top of the game tree. The idea came from the GameShrink method created by a group at CMU, only I am trying to avoid writing the general form and just doing what is needed to apply the technique to TicTacToe.

    In order to achieve this, I modified my hash function (for the transposition table) to enumerate all strategically equivalent positions (using rotation and flipping functions), and to only return the lowest of the values for each board. Unfortunately, now my program thinks X can force a win in 5 moves from an empty board when going first. After a long debugging session, it became apparent to me that the program was always returning the move for the lowest strategically equivalent board (I store the last move in the transposition table as part of my state).

    Is there a better way I can go about adding this feature, or a simple method for determining the correct move applicable to the current situation with what I have already done?
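
    One common remedy, sketched under the assumption that the board is a 9-character string with cells indexed 0 to 8: canonicalize with all 8 symmetries, but keep the permutation that won, so a move stored against the canonical board can be mapped back onto the real one:

        # build the 8 symmetries as index permutations
        def rot(p):   # 90-degree rotation
            return [p[6], p[3], p[0], p[7], p[4], p[1], p[8], p[5], p[2]]

        def flip(p):  # left-right mirror
            return [p[2], p[1], p[0], p[5], p[4], p[3], p[8], p[7], p[6]]

        SYMS, p = [], list(range(9))
        for _ in range(4):
            SYMS.append(p)
            SYMS.append(flip(p))
            p = rot(p)

        def canonical(board):
            # canon[j] == board[perm[j]], so a move m stored in canonical
            # coordinates corresponds to cell perm[m] on the real board
            key, perm = min((''.join(board[i] for i in s), s) for s in SYMS)
            return key, perm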

  • Android OS 2.2 Permissions: I have absolutely no idea why this simple piece of code doesn't work

    - by Kevin
    I'm just playing around with some code. I create an Activity and simply do something like this:

        long lo = currentTimeMillis();
        System.out.println(lo);
        lo *= 3;
        System.out.println(lo);
        SystemClock.setCurrentTimeMillis(lo);
        System.out.println(currentTimeMillis());

    Yes, in my AndroidManifest.xml, I've added:

        <uses-permission android:name="android.permission.SET_TIME"></uses-permission>
        <uses-permission android:name="android.permission.SET_TIME_ZONE"></uses-permission>

    Nothing changes. The SystemClock is never reset; it just keeps on ticking. The error I'm getting says that the permission "SET_TIME" was not granted to the program (protection level 3). The permissions are there, and the API documentation for 2.2 says this feature is supported now. I have no idea what I'm doing wrong. If android.content.Intent comes into play, please explain; I don't really understand the idea behind intents! Thanks for any help!

  • forcing stack w/i 32bit when -m64 -mcmodel=small

    - by chaosless
    I have C sources that must compile in 32-bit and 64-bit for multiple platforms, and a structure that takes the address of a buffer: I need to fit the address in a 32-bit value. Obviously, where possible, these structures will use naturally sized void * or char * pointers; however, for some parts, an API specifies the size of these pointers as 32 bits.

    On x86_64 Linux with -m64 -mcmodel=small, both static data and malloc()'d data fit within the 2Gb range. Data on the stack, however, still starts in high memory. So given a small utility _to_32() such as:

        int _to_32( long l ) {
            int i = l & 0xffffffff;
            assert( i == l );
            return i;
        }

    then:

        char *cp = malloc( 100 );
        int a = _to_32( cp );

    will work reliably, as would:

        static char buff[ 100 ];
        int a = _to_32( buff );

    but:

        char buff[ 100 ];
        int a = _to_32( buff );

    will fail the assert(). Does anyone have a solution for this without writing custom linker scripts? Or any ideas how to arrange the linker section for stack data? It would appear it is being put in this section in the linker script:

        .lbss :
        {
            *(.dynlbss)
            *(.lbss .lbss.* .gnu.linkonce.lb.*)
            *(LARGE_COMMON)
        }

    Thanks!
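
    One linker-script-free sketch, Linux-specific and assuming the API calls can run on a dedicated thread: give that thread a stack mmap'ed with MAP_32BIT, so its locals sit below 4Gb (compile with -pthread):

        #define _GNU_SOURCE
        #include <pthread.h>
        #include <sys/mman.h>

        #define STACK_SZ (8 * 1024 * 1024)

        extern int _to_32(long l);        /* as defined in the question */

        static void *worker(void *arg)
        {
            char buff[100];               /* lives on the low, mmap'ed stack */
            int a = _to_32((long)buff);   /* assert now holds */
            /* ... hand buff to the 32-bit API ... */
            return 0;
        }

        int main(void)
        {
            pthread_t tid;
            pthread_attr_t attr;
            void *stk = mmap(NULL, STACK_SZ, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS | MAP_32BIT, -1, 0);
            pthread_attr_init(&attr);
            pthread_attr_setstack(&attr, stk, STACK_SZ);
            pthread_create(&tid, &attr, worker, NULL);
            pthread_join(tid, NULL);
            return 0;
        }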

  • Appropriate Footwear for An Interview

    - by EoRaptor013
    There's a raging debate going on at my house about appropriate footwear for an IT interview. I have an interview on Thursday for a SQL/C# developer position with the fraud department at a large accounting firm. I was planning on wearing what I have pretty much always worn for an interview: a nice suit, white shirt, subdued tie, and a pair of dress cowboy boots. My spouse and daughter both know that my dress code for nearly every professional job I've ever gotten has been pretty much what I just described, including the boots.

    Now, however, I've been out of work for an unfortunately long time (my last contract ended 03/09, pretty much coincidental with the bottom falling out of the economy). My wife insists that style standards are fundamentally different on the left side of the Mississippi vs. the right side of the river. My view is that I've always worn "cowboy" boots, since I was old enough to fit into a real pair. I moved East as an adult over 30 years ago, and in all that time my dress patterns have never changed.

    Now I both really want, and really need, this job. But is that sufficient reason to change a habit 40 years in the making? I would really appreciate the thoughts ya'all (little West-of-the-Mississippi colloquialism, there) might have on this matter. Thanks.

    P.S. If this sort of question is inappropriate for this forum, I apologize.

  • ASP:GridView does not show data with ObjectDataSource

    - by Kashif
    I have been trying to bind a GridView to an ObjectDataSource with custom paging, but no output is displayed on my user control. Here is the code I am using:

        <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
                      DataKeyNames="LeadId" DataSourceID="dsBuyingLead1" AllowPaging="True">
            <Columns>
                <asp:BoundField DataField="Subject" HeaderText="Subject" ReadOnly="True" />
                <asp:BoundField DataField="ExpiryDate" HeaderText="ExpiryDate" ReadOnly="True" />
            </Columns>
        </asp:GridView>

        <asp:ObjectDataSource ID="dsBuyingLead1" runat="server" EnablePaging="True"
                              DataObjectTypeName="Modules.SearchBuyingLeadInfo"
                              OldValuesParameterFormatString="original_{0}"
                              SelectMethod="GetAllBuyingLeads"
                              StartRowIndexParameterName="startRow"
                              MaximumRowsParameterName="maximumRows"
                              SelectCountMethod="GetAllBuyingLeadsCount"
                              TypeName="Modules.SearchController">
            <SelectParameters>
                <asp:QueryStringParameter Name="searchText" QueryStringField="q" Type="String" />
            </SelectParameters>
        </asp:ObjectDataSource>

    Here are my methods from the SearchController class:

        [DataObjectMethod(DataObjectMethodType.Select)]
        public static long GetAllBuyingLeadsCount(string searchText)
        {
            return DataProvider.Instance().GetAllBuyingLeadsCount(searchText);
        }

        [DataObjectMethod(DataObjectMethodType.Select)]
        public static List<SearchBuyingLeadInfo> GetAllBuyingLeads
            (string searchText, int startRow, int maximumRows)
        {
            List<SearchBuyingLeadInfo> l = CBO.FillCollection<SearchBuyingLeadInfo>
            (
                DataProvider.Instance()
                    .GetAllBuyingLeadswithText(searchText, startRow, maximumRows)
            );
            return l;
        }

    where SearchBuyingLeadInfo is my data access object class. I have verified, by setting breakpoints, that both GetAllBuyingLeadsCount and GetAllBuyingLeads return non-zero values, but unfortunately nothing is displayed on the grid; only the column headers appear. Can anyone tell me what I am missing?
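
    One thing worth checking, sketched below as an assumption rather than a confirmed diagnosis: ObjectDataSource expects its SelectCountMethod to return an Int32, so the long return type may be the mismatch here:

        [DataObjectMethod(DataObjectMethodType.Select)]
        public static int GetAllBuyingLeadsCount(string searchText)
        {
            return (int)DataProvider.Instance().GetAllBuyingLeadsCount(searchText);
        }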

  • How do I declare "Member Fields" in Java?

    - by Bub
    This question probably reveals my total lack of knowledge in Java. But let me first show you what I thought was the correct way to declare a "member field":

        public class NoteEdit extends Activity {
            private Object mTitleText;
            private Object mBodyText;

    I'm following Google's notepad tutorial for Android (here), and it simply says: "Note that mTitleText and mBodyText are member fields (you need to declare them at the top of the class definition)." I thought I got it, and then realized that this little snippet of code wasn't working:

        if (title != null) {
            mTitleText.setText(title);
        }
        if (body != null) {
            mBodyText.setText(body);
        }

    So either I didn't declare the "member fields" correctly (I thought all that was needed was to declare them as private Objects at the top of the NoteEdit class) or I'm missing something else. Thanks in advance for any help.

    UPDATE: I was asked to show where these fields are being initialized. Here is another code snippet; hope it's helpful:

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            // TODO Auto-generated method stub
            super.onCreate(savedInstanceState);
            setContentView(R.layout.note_edit);

            Long mRowId;
            mTitleText = (EditText) findViewById(R.id.title);
            mBodyText = (EditText) findViewById(R.id.body);
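
    A sketch of the likely fix: Object has no setText method, so the fields need the concrete widget type that the onCreate casts already use (and note that mRowId, declared inside onCreate, is a local variable rather than a member field):

        import android.widget.EditText;

        public class NoteEdit extends Activity {
            private EditText mTitleText;   // matches the (EditText) cast in onCreate
            private EditText mBodyText;
            private Long mRowId;           // moved up so it is a member field too
        }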

  • How can I differentiate a manual scroll (via mousewheel/scrollbar) from a Javascript/jQuery scroll?

    - by David Murdoch
    UPDATE: Here is a jsbin example demonstrating the problem.

    Basically, I have the following javascript, which scrolls the window to an anchor on the page:

        // get anchors with href's that start with "#"
        $("a[href^=#]").live("click", function () {
            var target = $($(this).attr("href"));

            // if the target exists: scroll to it...
            if (target[0]) {
                // If the page isn't long enough to scroll to the target's position
                // we want to scroll as much as we can. This part prevents a sudden
                // stop when window.scrollTop reaches its maximum.
                var y = Math.min(target.offset().top,
                                 $(document).height() - $(window).height());

                // also, don't try to scroll to a negative value...
                y = Math.max(y, 0);

                // OK, you can scroll now...
                $("html,body").stop().animate({ "scrollTop": y }, 1000);
            }
            return false;
        });

    It works perfectly... until I manually try to scroll the window. When the scrollbar or mousewheel is scrolled, I need to stop the current scroll animation, but I'm not sure how to do this. This is probably my starting point:

        $(window).scroll(function (e) {
            if (isManuallyScrolled(e)) {
                $("html,body").stop();
            }
        });

    ...but I'm not sure how to write the isManuallyScrolled function. I've checked out e (the event object) in Google Chrome's console, and as far as I can tell there is no way to differentiate between a manual scroll and jQuery's animate() scroll. How can I differentiate between a manual scroll and one triggered via jQuery's $.fn.animate function?
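
    A sketch of a common workaround: rather than inspecting the scroll event, listen for input events that only a human produces and cancel the animation there (DOMMouseScroll covers older Firefox, mousewheel the other browsers of the era, keyup catches keyboard scrolling):

        $(window).bind("mousedown DOMMouseScroll mousewheel keyup touchmove", function () {
            $("html,body").stop();
        });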

  • Java Scanner newline parsing with regex (Bug?)

    - by SEK
    I'm developing a syntax analyzer by hand in Java, and I'd like to use regexes to parse the various token types. The problem is that I'd also like to be able to accurately report the current line number if the input doesn't conform to the syntax.

    Long story short, I've run into a problem when I try to actually match a newline with the Scanner class. To be specific, when I try to match a newline with a pattern using the Scanner class, it almost always fails. But when I perform the same matching using a Matcher and the same source string, it retrieves the newline exactly as you'd expect it to. Is there a reason for this that I can't seem to discover, or is this a bug, as I suspect? FYI: I was unable to find a bug in the Sun database that describes this issue, so if it is a bug, it hasn't been reported.

    Example code:

        Pattern newLinePattern = Pattern.compile("(\\r\\n?|\\n)", Pattern.MULTILINE);
        String sourceString = "\r\n\n\r\r\n\n";

        Scanner scan = new Scanner(sourceString);
        scan.useDelimiter("");
        int count = 0;
        while (scan.hasNext(newLinePattern)) {
            scan.next(newLinePattern);
            count++;
        }
        System.out.println("found " + count + " newlines"); // finds 7 newlines

        Matcher match = newLinePattern.matcher(sourceString);
        count = 0;
        while (match.find()) {
            count++;
        }
        System.out.println("found " + count + " newlines"); // finds 5 newlines
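
    The counts are consistent with Scanner's tokenizing rather than a bug: with an empty delimiter every token is a single character, so hasNext/next test the pattern against one character at a time and "\r\n" counts as two newlines (hence 7 single-character matches, versus the Matcher's 5 on the same string). A sketch that keeps Scanner but matches across token boundaries:

        Scanner scan = new Scanner(sourceString);
        int count = 0;
        while (scan.findWithinHorizon(newLinePattern, 0) != null) {
            count++; // finds 5, like the Matcher
        }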

  • comma separated values in oracle function body

    - by dmitry
    I've got the following Oracle function, but it does not work and errors out. I used Ask Tom's method of converting comma-separated values for use in select * from table1 where col1 in (...). Declared in the package header:

        TYPE myTableType IS table of varchar2 (255);

    Part of the package body:

        l_string long default iv_value_with_comma_separated || ',';
        l_data   myTableType := myTableType();
        n        NUMBER;
        begin
            begin
                LOOP
                    EXIT when l_string is null;
                    n := instr( l_string, ',' );
                    l_data.extend;
                    l_data(l_data.count) := ltrim( rtrim( substr( l_string, 1, n-1 ) ) );
                    l_string := substr( l_string, n+1 );
                END LOOP;
            end;

            OPEN my_cursor FOR
                select * from table_a
                where column_a in (select * from table(l_data));
            CLOSE my_cursor;
        END;

    The above fails, but it works fine when I remove select * from table(l_data). Can someone please tell me what I might be doing wrong here?
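
    A sketch of the usual culprit in the classic Ask Tom setup: TABLE() needs a collection type the SQL engine can see, so the type must be created at schema level rather than declared in the package header, and on older releases the collection needs a CAST:

        CREATE OR REPLACE TYPE myTableType AS TABLE OF VARCHAR2(255);
        /

        -- then, inside the package body:
        OPEN my_cursor FOR
            SELECT * FROM table_a
            WHERE column_a IN (SELECT column_value
                               FROM TABLE(CAST(l_data AS myTableType)));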

  • VS2010 Web Deploy: how to remove absolute paths and automate setAcl?

    - by Julien Lebosquain
    The integrated web deployment in Visual Studio 2010 is pretty nice: it can create a package ready to be deployed with MSDeploy on a target IIS machine. The problem is that this package will be redistributed to a client, who will install it himself using "Import Application" in IIS when MSDeploy is installed.

    The default package always includes the full path from the development machine, "D:\Dev\XXX\obj\Debug\Package\PackageTmp", in the source manifest file. It doesn't prevent installation, of course, since it was designed this way, but it looks ugly in the import dialog and has no meaning to the client. Worse, he will wonder what those paths are, and it looks quite confusing.

    By customizing the .csproj file (adding MSBuild properties used by the package-creation task), I managed to add additional parameters to the package. However, I spent most of the afternoon in the 2600-line Web.Publishing.targets trying to understand which parameter influences the "development path" behavior, in vain.

    I also tried to use setAcl to customize security on a given folder after deployment, but I only managed to do this with MSBuild by using a relative path... it shouldn't matter if I resolve the first problem, though. I could modify the generated archive after its creation, but I would prefer everything to be automated using MSBuild. Does anyone know how to do that?
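
    For the path problem, one property-based sketch that is often suggested (an assumption to verify: _PackageTempDir is an internal, underscore-prefixed property, so it may change between Visual Studio versions): it replaces the obj\Debug\Package\PackageTmp staging path recorded in the manifest with a neutral one:

        <!-- in the .csproj, or passed on the command line as /p:_PackageTempDir=C:\MyWebApp -->
        <PropertyGroup>
          <_PackageTempDir>C:\MyWebApp</_PackageTempDir>
        </PropertyGroup>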
