Search Results

Search found 13068 results on 523 pages for 'copy and paste'.


  • Why am I getting "(304) Not Modified" error on some links when using HttpWebRequest?

    - by Greg
    Hi. Any ideas why, on some links that I try to access using HttpWebRequest, I get "The remote server returned an error: (304) Not Modified." in the code? The code I'm using is from Jeff's post here. The code is essentially a simple proxy server: I point my browser at this locally running piece of code, which takes the browser's request and proxies it on by creating a new HttpWebRequest, as you'll see in the code. It works great for most sites/links, but for some this error comes up. One key bit in the code copies the HTTP header settings from the browser's request to its own request out to the site, including this header attribute:

        case "If-Modified-Since":
            request.IfModifiedSince = DateTime.Parse(listenerContext.Request.Headers[key]);
            break;

    I'm not sure whether the issue is something to do with how it mimics this aspect of the request and what then happens when the result comes back. I get the issue, for example, from http://en.wikipedia.org/wiki/Main_Page. Thanks
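
    A 304 is the server's normal answer to a conditional GET whose If-Modified-Since date is still current; some HTTP clients surface it as an exception rather than as a response. A minimal Python sketch of the same round trip (the date header is an arbitrary example), where urllib raises on 304 much like HttpWebRequest does:

        import urllib.request
        import urllib.error

        req = urllib.request.Request("http://en.wikipedia.org/wiki/Main_Page")
        # Claim we already hold a copy from this date; a recent-enough date
        # invites the server to answer 304 instead of resending the body.
        req.add_header("If-Modified-Since", "Sat, 29 Oct 2011 19:43:31 GMT")

        try:
            resp = urllib.request.urlopen(req)
            print("body resent, status", resp.status)
        except urllib.error.HTTPError as e:
            if e.code == 304:
                print("not a failure: the cached copy is still valid")
            else:
                raise

    For a proxy, the usual fix is to catch that case and relay the 304 status and headers back to the browser instead of treating it as an error.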

    Read the article

  • Handling missing/incomplete data in R--is there a function to mask but not remove NAs?

    - by doug
    As you would expect from a DSL aimed at data analysis, R handles missing/incomplete data very well. For instance, many R functions have an 'na.rm' flag that you can set to 'T' to remove the NAs:

        mean(c(5, 6, 12, 87, 9, NA, 43, 67), na.rm=T)

    But if you want to deal with NAs before the function call, you need to do something like this. To remove each 'NA' from a vector:

        vx = vx[!is.na(vx)]

    To remove each 'NA' from a vector and replace it with a '0':

        ifelse(is.na(vx), 0, vx)

    To remove each row that contains an 'NA' from a data frame:

        dfx = dfx[complete.cases(dfx),]

    All of these permanently remove 'NA' or rows with an 'NA' in them. Sometimes that isn't quite what you want, though--making an 'NA'-excised copy of the data frame might be necessary for the next step in the workflow, but in subsequent steps you often want those rows back (e.g., to calculate a column-wise statistic for a column that has missing rows caused by a prior call to 'complete.cases', even though that column itself has no 'NA' values in it).

    To be as clear as possible about what I'm looking for: Python/NumPy has a 'masked array' class, with a 'mask' attribute, which lets you conceal--but not remove--NAs during a function call. Is there an analogous function in R?
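
    For reference, the NumPy behaviour the question points at looks roughly like this (a minimal sketch using numpy.ma, with NaN standing in for NA):

        import numpy as np

        x = np.ma.masked_invalid([5.0, 6, 12, 87, 9, np.nan, 43, 67])

        print(x.mean())   # 32.71...; the masked slot is skipped, not removed
        print(len(x))     # 8; the array keeps its full shape
        print(x.data)     # the untouched underlying values remain available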

    Read the article

  • IO operation taking long time for files in remote server

    - by user841311
    I have files of 150 MB each on a remote server in a different domain on the network. I am accessing them through a UNC path. I want to read the file content and perform a basic string search. When I try reading the files line by line, the operation just doesn't finish and takes a long time--more than 30 minutes. However, when I copy those files to my local machine, the same code reads and performs the string search in less than 5 seconds. I don't have the .NET framework installed on the server, so I have to do this from my machine. I want to perform all this through C# code in .NET framework 3.5, so I don't want to explicitly FTP all the files to my machine before performing this operation. Sample code:

        DirectoryInfo dir = new DirectoryInfo(strFilePath);
        FileInfo[] fiArray = dir.GetFiles("*.txt");
        foreach (FileInfo fi in fiArray)
        {
            // reading file content from the server takes a long time,
            // but is fast on the local machine
            // perform string search
        }

    Let me know if my requirement is not clear. Thanks in advance!
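
    A common mitigation, sketched here in Python since the idea is language-neutral (the UNC path and search string are invented), is to read the remote file in large sequential blocks rather than line by line, so each network round trip fetches megabytes instead of one buffered line's worth:

        TARGET = b"needle"
        CHUNK = 4 * 1024 * 1024  # 4 MB per read amortizes the round trips

        def contains(path, target=TARGET):
            overlap = len(target) - 1
            tail = b""
            with open(path, "rb") as f:
                while True:
                    block = f.read(CHUNK)
                    if not block:
                        return False
                    if target in tail + block:
                        return True
                    # Keep the block's tail so matches spanning a
                    # chunk boundary are still found.
                    tail = block[-overlap:] if overlap else b""

        print(contains(r"\\server\share\big.txt"))

    The equivalent idea in C# is a FileStream opened with a large buffer size instead of line-by-line StreamReader reads.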

    Read the article

  • Global resources can't be resolved after publishing Website in VS2008

    - by Scoregraphic
    Hi there. I have a web project running in VS 2008. We have some global resource files (*.resx) in the App_GlobalResources folder for internationalisation. All this works like a charm on my local IIS installation out of VS. But when I publish my web project to the local filesystem and/or another server, the resources can no longer be found. So I guess the pre-compilation is somehow corrupting stuff. When I call the pre-compiled web, I get an error that the resource object with key xyz cannot be found, although it could be found before. I checked with .NET Reflector whether the resource stuff made it into the *.dlls. All the identifiers are there (bin/Web.dll, bin/<culture>/Web.resources.dll). The identifiers are loaded like this:

        <asp:MenuItem NavigateUrl="~/OrderNew.aspx"
                      Text="<%$ Resources:MyProject, MenuNewOrder %>"
                      Value="NewOrder">

    The resource files are called MyProject.resx and MyProject.<culture>.resx, where <culture> corresponds to the specific culture (i.e. MyProject.de-DE.resx). Any ideas how to solve this? I really appreciate any help. Thanks

    Edit: If I copy the App_GlobalResources folder manually to the output, the resources load normally. So I really, really wonder what this pre-compilation is all about. I'm still interested in solving the issue "the right way".

    Read the article

  • How do I select the preceding nodes of a text node starting from a specific node and not the root node?

    - by Rachel
    How do I select the preceding nodes of a text node starting from a specific node whose id I know, instead of getting the text nodes from the root node? When I invoke the piece of code below from a template match on a text node, I get all the preceding text nodes from the root:

        <xsl:template match="text()[. is $text-to-split]">
            <xsl:variable name="split-index" as="xsd:integer"
                select="$index - sum(preceding::text()/string-length(.))"/>
            <xsl:value-of select="substring(., 1, $split-index - 1)"/>
            <xsl:copy-of select="$new"/>
            <xsl:value-of select="substring(., $split-index)"/>
        </xsl:template>

        <xsl:variable name="text-to-split" as="text()?"
            select="descendant::text()[sum((preceding::text(), .)/string-length(.)) ge $index][1]"/>

    I want to modify this code to select only the text nodes that appear after the node having a specific id, say 123--i.e. something like //*[@id='123']. How do I include that condition in the places where I use preceding::text(), in order to select preceding text nodes relative to the specific node whose id I know?
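
    Outside XSLT, the shape of that selection--text nodes positioned after an anchor element found by id--can be sketched with lxml's XPath support (a toy document for illustration):

        from lxml import etree

        doc = etree.fromstring(
            "<root><a>before</a><b id='123'>anchor</b><c>middle</c><d>after</d></root>"
        )

        anchor = doc.xpath("//*[@id='123']")[0]
        # All text nodes that follow the anchor, in document order.
        print(anchor.xpath("following::text()"))  # ['middle', 'after']

    In XPath 2.0 that node set can then be combined with the current node's preceding::text() axis using the intersect operator.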

    Read the article

  • Improving the performance of XSL

    - by Rachel
    I am using the XSL 2.0 code below to find the ids of the text nodes that contain the list of indices I give as input. The code works perfectly, but in terms of performance it takes a long time for huge files. Even for huge files, if the index values are small, the result comes back in a few ms. I am using the saxon9he Java processor to execute the XSL.

        <xsl:variable name="insert-data" as="element(data)*">
            <xsl:for-each-group select="doc($insert-file)/insert-data/data"
                                group-by="xsd:integer(@index)">
                <xsl:sort select="current-grouping-key()"/>
                <data index="{current-grouping-key()}"
                      text-id="{generate-id(
                          $main-root/descendant::text()[
                              sum((preceding::text(), .)/string-length(.)) ge current-grouping-key()
                          ][1])}">
                    <xsl:copy-of select="current-group()/node()"/>
                </data>
            </xsl:for-each-group>
        </xsl:variable>

    With the above solution, if the index value is huge, say 270962, the XSL takes 83427 ms to execute. In huge files, if the index values are huge, say 4605415 or 4605431, it takes several minutes. The computation of the variable "insert-data" seems to take the time, even though it is a global variable and computed only once. Should the XSL be addressed, or the processor? How can I improve the performance of the XSL?
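
    The predicate above re-sums the lengths of all preceding text nodes for every candidate node, which is quadratic in document size and explains why large index values hurt. The linear-time alternative--build a running total once, then binary-search it per index--is easiest to show outside XSLT; a Python sketch with a toy list standing in for the document's text nodes:

        from bisect import bisect_left
        from itertools import accumulate

        texts = ["foo", "ba", "rbazqu", "ux"]          # stand-ins for text nodes
        cum = list(accumulate(len(t) for t in texts))  # [3, 5, 11, 13], built once

        def node_for_index(index):
            """Position of the first text node whose cumulative length reaches index."""
            return bisect_left(cum, index)

        print(node_for_index(4))   # 1: "ba" covers characters 4-5
        print(node_for_index(11))  # 2: "rbazqu" ends exactly at character 11

    The same accumulate-once idea can be expressed in XSLT by computing the running totals into a variable before the grouping loop, instead of inside the predicate.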

    Read the article

  • Who architected / designed C++'s IOStreams, and would it still be considered well-designed by today's standards?

    - by stakx
    First off, it may seem that I'm asking for subjective opinions, but that's not what I'm after. I'd love to hear some well-grounded arguments on this topic.

    In the hope of getting some insight into how a modern streams/serialization framework ought to be designed, I recently got myself a copy of the book Standard C++ IOStreams and Locales by Angelika Langer and Klaus Kreft. I figured that if IOStreams wasn't well-designed, it wouldn't have made it into the C++ standard library in the first place. After having read various parts of this book, I am starting to have doubts whether IOStreams can compare to e.g. the STL from an overall architectural point of view. Read e.g. this interview with Alexander Stepanov (the STL's "inventor") to learn about some design decisions that went into the STL. What surprises me in particular:

    - It seems to be unknown who was responsible for IOStreams' overall design (I'd love to read some background information about this--does anyone know good resources?).
    - Once you delve beneath the immediate surface of IOStreams, e.g. if you want to extend IOStreams with your own classes, you get to an interface with fairly cryptic and confusing member function names, e.g. getloc/imbue, uflow/underflow, snextc/sbumpc/sgetc/sgetn, pbase/pptr/epptr (and there are probably even worse examples). This makes it so much harder to understand the overall design and how the individual parts co-operate. Even the book I mentioned above doesn't help that much (IMHO).

    Thus my question: if you had to judge by today's software engineering standards (if there actually is any general agreement on these), would C++'s IOStreams still be considered well-designed? (I wouldn't want to improve my software design skills from something that's generally considered outdated.)

    Read the article

  • Conceptual question: what does performSelectorOnMainThread do?

    - by hib
    Hello all. I just came across this strange situation. I was using the lazy image loading technique from Apple's examples (everybody has seen Apple's LazyTableImages sample). When I used the class in my application, it gave me something to learn, but I didn't know what was actually happening there. So here goes the scenario. I was reloading my table on finishing the parsing of data:

        - (void)didFinishParsing:(NSMutableArray *)appList {
            self.upcomingArray = [NSMutableArray arrayWithArray:loadedApps];
            [self.upcomingTableView reloadData];
            self.queue = nil;   // we are finished with the queue and our ParseOperation
        }

    But as a result, the connection was not starting and the images were not loading. When I copied the lazy image loading completely and replaced the above code with the following, it works fine:

        - (void)didFinishParsing:(NSMutableArray *)appList {
            [self performSelectorOnMainThread:@selector(handleLoadedApps:)
                                   withObject:appList
                                waitUntilDone:NO];
            self.queue = nil;   // we are finished with the queue and our ParseOperation
        }

    So I want to know the concept behind this. Please let me know if you cannot understand the question or the details are not enough, because I desperately want to know why it is like this.
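
    The rule behind the difference is that UIKit must only be touched from the main thread, and didFinishParsing: is invoked on the parse operation's background thread; performSelectorOnMainThread: hands the work back to the thread that owns the UI. The same hand-off pattern, sketched in Python with a worker posting a callback to the thread that "owns" the interface:

        import queue
        import threading

        ui_queue = queue.Queue()   # drained only by the main thread

        def parse_in_background():
            apps = ["app-%d" % i for i in range(3)]   # pretend parse result
            # Don't touch UI state here; hand the update to the main thread.
            ui_queue.put(lambda: print("reloading table with", apps))

        threading.Thread(target=parse_in_background).start()

        # Main thread: the only place the "UI" callback actually runs.
        ui_queue.get(timeout=5)()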

    Read the article

  • Moving inserted container element if possible

    - by doublep
    I'm trying to achieve the following optimization in my container library: when inserting an lvalue-referenced element, copy it into internal storage; but when inserting an rvalue-referenced element, move it if supported. The optimization is supposed to be useful e.g. if the contained element type is something like std::vector, where moving when possible would give a substantial speedup. However, so far I have been unable to devise any working scheme for this. My container is quite complicated, so I can't just duplicate the insert() code several times: it is large. I want to keep all the "real" code in some inner helper, say do_insert() (which may be templated), and the various insert()-like functions would just call that with different arguments. My best bet code for this (a prototype, of course, without doing anything real):

        #include <iostream>
        #include <utility>

        struct element
        {
            element () { };
            element (element&&) { std::cerr << "moving\n"; }
        };

        struct container
        {
            void insert (const element& value) { do_insert (value); }
            void insert (element&& value) { do_insert (std::move (value)); }

        private:
            template <typename Arg>
            void do_insert (Arg arg)
            {
                element x (arg);
            }
        };

        int main ()
        {
            { // Shouldn't move.
                container c;
                element x;
                c.insert (x);
            }
            { // Should move.
                container c;
                c.insert (element ());
            }
        }

    However, this doesn't work at least with GCC 4.4 and 4.5: it never prints "moving" on stderr. Or is what I want impossible to achieve, and is that why emplace()-like functions exist in the first place?

    Read the article

  • How to load a library that depends on another library, all from a jar file

    - by Philip
    I would like to ship my application as a self-contained jar file. The jar file should contain all the class files, as well as two shared libraries. One of these shared libraries is written for the JNI and is essentially an indirection to the other one (which is 100% C). I first tried running my jar file without the libraries, but with them accessible through the LD_LIBRARY_PATH environment variable. That worked fine. I then put the JNI library into the jar file. I have read about loading libraries from jar files by copying them first to some temporary directory, and that worked well for me (note that the 100% C library was, I suppose, loaded as before). Now I want to put both libraries into the jar, but I don't understand how I can make sure that they will both be loaded. Sure, I can copy them both to a temporary directory, but when I load the "indirection" one, it always gives me:

        java.lang.UnsatisfiedLinkError: /tmp/.../libindirect.so: /libpure.so:
        cannot open shared object file: No such file or directory

    I've tried to force the JVM to load the "100% C" library first by explicitly calling System.load(...) on its temporary file, but that didn't work any better. I suspect the system looks for it when resolving the links in libindirect.so, but doesn't care about what the JVM loaded. Can anyone help me with that one? Thanks
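
    At the dynamic-linker level the question is whether libpure.so can be found (and its symbols resolved) at the moment libindirect.so is opened. The mechanism is easy to poke at from Python's ctypes; the paths below are stand-ins for the extracted temp files:

        import ctypes

        # Open the pure-C library first, exporting its symbols globally so
        # the dependent library's undefined references can resolve against it.
        ctypes.CDLL("/tmp/work/libpure.so", mode=ctypes.RTLD_GLOBAL)

        # With the dependency already in the process, this load can succeed.
        indirect = ctypes.CDLL("/tmp/work/libindirect.so")

    Whether such a pre-load satisfies the dependency hinges on libindirect's recorded DT_NEEDED name matching the soname of what was loaded, which is worth checking with ldd or readelf -d.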

    Read the article

  • Calculate the digital root of a number

    - by Gregory Higley
    A digital root, according to Wikipedia, is "the number obtained by adding all the digits, then adding the digits of that number, and then continuing until a single-digit number is reached." For instance, the digital root of 99 is 9, because 9 + 9 = 18 and 1 + 8 = 9. My Haskell solution--and I'm no expert--is as follows:

        digitalRoot n
            | n < 10 = n
            | otherwise = digitalRoot . sum . map (\c -> read [c]) . show $ n

    As a language junky, I'm interested in seeing solutions in as many languages as possible, both to learn about those languages and possibly to learn new ways of using languages I already know. I'm particularly interested in the tightest, most elegant solutions in Haskell and REBOL, my two principal "hobby" languages, but any ol' language will do. (I pay the bills with unrelated projects in Objective-C and C#.) Here's my (verbose) REBOL solution:

        digital-root: func [n [integer!] /local added expanded] [
            either n < 10 [
                n
            ][
                expanded: copy []
                foreach c to-string n [
                    append expanded to-integer to-string c
                ]
                added: 0
                foreach e expanded [
                    added: added + e
                ]
                digital-root added
            ]
        ]

    EDIT: As some have pointed out either directly or indirectly, there's a quick one-line expression that can calculate this. You can find it in several of the answers below and on the linked Wikipedia page. (I've awarded Varun the answer, as the first to point it out.) Wish I'd known about that before, but we can still bend our brains with this question by avoiding solutions that involve that expression, if you're so inclined. If not, Crackoverflow has no shortage of questions to answer. :)
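
    The one-liner the EDIT alludes to is the digit-sum congruence from the Wikipedia page: for positive n, the digital root is 1 + (n - 1) mod 9. Both readings side by side in Python:

        def digital_root(n):
            """Literal definition: repeatedly sum the digits."""
            while n >= 10:
                n = sum(int(d) for d in str(n))
            return n

        def digital_root_fast(n):
            """Constant-time version via the mod-9 congruence."""
            return 1 + (n - 1) % 9 if n else 0

        assert all(digital_root(k) == digital_root_fast(k) for k in range(10000))
        print(digital_root_fast(99))  # 9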

    Read the article

  • Weird bug with C++ lambda expressions in VS2010

    - by Andrei Tita
    In a couple of my projects, the following code:

        class SmallClass
        {
        public:
            int x1, y1;

            void TestFunc()
            {
                auto BadLambda = [&]()
                {
                    int g = x1 + 1;                  // ok
                    int h = y1 + 1;                  // C2296
                    int l = static_cast<int>(y1);    // C2440
                };

                int y1_copy = y1; // it works if you create a local copy
                auto GoodLambda = [&]()
                {
                    int h = y1_copy + 1;   // ok
                    int l = this->y1 + 1;  // ok
                };
            }
        };

    generates:

        error C2296: '+' : illegal, left operand has type 'double (__cdecl *)(double)'

    or alternatively:

        error C2440: 'static_cast' : cannot convert from 'double (__cdecl *)(double)' to 'int'

    You get the picture. It also happens when capturing by value. The error seems to be tied to the member name "y1". It has happened in different classes, in different projects, and with (seemingly) any type for y1; for example, this code:

        [...]
        MyClass y1;

        void TestFunc()
        {
            auto BadLambda = [&]()->void
            {
                int l = static_cast<int>(y1); // C2440
            };
        }

    generates both of these errors:

        error C2440: 'static_cast' : cannot convert from 'MyClass' to 'int'
            No user-defined-conversion operator available that can perform this conversion,
            or the operator cannot be called
        error C2440: 'static_cast' : cannot convert from 'double (__cdecl *)(double)' to 'int'
            There is no context in which this conversion is possible

    It didn't, however, happen in a completely new project. I thought maybe it was related to Lua (the two projects where I managed to reproduce this bug both used Lua), but I did not manage to reproduce it in a new project linking Lua. It doesn't seem to be a known bug, and I'm at a loss. Any ideas as to why this happens? (I don't need a workaround; there are a few in the code already.) Using Visual Studio 2010 Express version 10.0.40219.1 SP1 Rel.

    Read the article

  • Displaying plist data in a UITableView

    - by Christien
    I have a plist with dictionaries and a number of strings per dictionary (shown at the URL below), and the list of items in the plist runs into the thousands. I need to display this plist data in a UITableView. How do I do this? My code:

        - (void)viewWillAppear:(BOOL)animated
        {
            // get paths from root directory
            NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectory = [documentPaths objectAtIndex:0];
            NSString *documentPlistPath = [documentsDirectory stringByAppendingPathComponent:@"p.plist"];
            NSDictionary *dict = [NSDictionary dictionaryWithContentsOfFile:documentPlistPath];
            valueArray = [dict objectForKey:@"title"];
            self.mySections = [valueArray copy];
            NSLog(@"value array %@", self.mySections);
        }

        - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView
        {
            return [self.mySections count];
        }

        - (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section
        {
            NSString *key = [[self.mySections objectAtIndex:section] objectForKey:@"pass"];
            return [NSString stringWithFormat:@"%@", key];
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
        {
            return 5;
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
        {
            static NSString *CellIdentifier = @"Cell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleValue1 reuseIdentifier:CellIdentifier];
            }

            // Configure the cell...
            NSUInteger section = [indexPath section];
            NSUInteger row = [indexPath row];
            cell.textLabel.text = [[self.mySections objectAtIndex:row] objectForKey:@"title"];
            cell.detailTextLabel.text = [[self.mySections objectAtIndex:section] objectForKey:[allKeys objectAtIndex:1]];
            return cell;
        }

    The error is:

        2012-12-17 16:21:07.372 Project[2076:11603] *** Terminating app due to uncaught exception
        'NSRangeException', reason: '-[__NSCFArray objectAtIndex:]: index (4) beyond bounds (4)'
        *** First throw call stack:
        (0x1703052 0x1523d0a 0x16aba78 0x16ab9e9 0x16fcc60 0x1d03a 0x391e0f 0x392589 0x37ddfd
         0x38c851 0x337301 0x1704e72 0x277592d 0x277f827 0x2705fa7 0x2707ea6 0x2707580 0x16d79ce
         0x166e670 0x163a4f6 0x1639db4 0x1639ccb 0x1505879 0x150593e 0x2f8a9b 0x2158 0x20b5)
        terminate called throwing an exception

    Read the article

  • How do I create a new project in TFS from an existing project (breaking history)?

    - by Lindsay
    My team is taking over a project from a previous team. We use a different TFS server than the original team, and we are also not interested in keeping the history of the project, because we are accepting the latest version of the code as the beginning of our history with the project. Branching is not an option, since we want to start our history from the current version of the code. We just want a fresh project with the existing code. I have not been able to create the new project from the old code successfully. I keep getting an error: "Source control cannot add the solution: Solution would span multiple workspaces". My process for attempting the new project creation:

    1. Create a workspace for the previous team's version of the code.
    2. Get the latest version of that code into the local mapped workspace directory.
    3. Open the solution.
    4. Unbind all projects and the solution.
    5. Close the solution.
    6. Create a workspace for the new version of the code on our TFS server.
    7. Copy the unbound code on my local box to the new local workspace mapped folder.
    8. Open the solution from the new directory.
    9. "Add to source control" from the new solution.

    Then I get the error. I have tried removing the TFS security files from the code directories in the unbound version, and tried changing source control instead of adding to source control (but it just binds back to the original instead of letting me bind to the new). Is there any other way to do this besides recreating the solution/projects and adding back all the files and references? It doesn't seem like it should be this difficult... Any advice much appreciated!

    Read the article

  • WYSIWYG in Doxygen

    - by Adam Shiemke
    I'm working on a fairly large project written in C. The idea was to build a library of modular blocks that can be reused across several platforms. Each module is associated with a Word document in .docx format (a huge pain to diff/merge). In these docs, an interface section is specified, listing datatypes and publicly accessible functions. These were often inconsistent with the actual implementation in code, and wading through all this documentation was a pain. I've been working to switch to Doxygen to simplify document management. I haven't found a good way to embed the previously written documentation into the Doxygen output. I've copy-pasted the documents into sections and used modules to group the sources together, but the document sections look ugly in the comments (the output is pretty), and since Doxygen takes a while to parse through our code (about 30 minutes), validating formatting is a pain. Is there some way to WYSIWYG large blocks of documentation into Doxygen? I feel this would improve the number of people documenting their code, and the quality of that documentation. I considered linking to HTML, but that splits up the documentation. I also considered putting the docs inline in HTML, but this also seems like a pain and would mean everyone needs a WYSIWYG HTML editor (or some HTML skillz). Any ideas on how to make things easier and prettier? Thanks loads.

    Read the article

  • Several Objective-C objects become Invalid for no reason, sometimes.

    - by farnsworth
        - (void)loadLocations
        {
            NSString *url = @"<URL to a text file>";
            NSStringEncoding enc = NSUTF8StringEncoding;
            NSString *locationString = [[NSString alloc] initWithContentsOfURL:[NSURL URLWithString:url]
                                                                  usedEncoding:&enc
                                                                         error:nil];
            NSArray *lines = [locationString componentsSeparatedByString:@"\n"];
            for (int i = 0; i < [lines count]; i++) {
                NSString *line = [lines objectAtIndex:i];
                NSArray *components = [line componentsSeparatedByString:@", "];
                Restaurant *res = [byID objectForKey:[components objectAtIndex:0]];
                if (res) {
                    NSString *resAddress = [components objectAtIndex:3];
                    NSArray *loc = [NSArray arrayWithObjects:[components objectAtIndex:1],
                                                             [components objectAtIndex:2]];
                    [res.locationCoords setObject:loc forKey:resAddress];
                } else {
                    NSLog([[components objectAtIndex:0] stringByAppendingString:@" res id not found."]);
                }
            }
        }

    There are a few weird things happening here. First, at the two lines where the NSArray lines is used, this message is printed to the console:

        *** Terminating app due to uncaught exception 'NSInvalidArgumentException',
        reason: '*** -[NSCFDictionary count]: method sent to an uninitialized mutable dictionary object'

    which is strange, since lines is definitely not an NSMutableDictionary, definitely is initialized, and because the app doesn't crash. Also, at random points in the loop, all of the variables that the debugger can see will become invalid--local variables, property variables, everything. Then after a couple of lines they will go back to their original values. setObject:forKey: never has an effect on res.locationCoords, which is an NSMutableDictionary. I'm sure that res, res.locationCoords, and byID are initialized. I also tried adding a retain or copy to lines; same thing. I'm sure there's a basic memory management principle I'm missing here, but I'm at a loss.

    Read the article

  • MSBuild script fails but produces no errors

    - by Kate
    I have an MSBuild script that I am executing through TeamCity. One of the tasks it runs is from Xheo DeployLX CodeVeil, which obfuscates some DLLs. The task I am using is called VeilProject. I have run the CodeVeil project through the interface manually and it works correctly, so I think I can safely assume that the actual obfuscation process is OK. This task used to take around 40 minutes, and the rest of the MSBuild file executed perfectly and finished without errors. For some reason this task is now taking 1 hr 20 minutes or so to execute. Once the VeilProject task is finished, the output from the task says it completed successfully; however, the MSBuild script fails at this point. I have a task directly after the VeilProject task and its output does not appear. Using diagnostic output from MSBuild I can see the following. My questions are:

    - Could it be that the MSBuild script has timed out, so that once the task completes after a certain timeout period, the build just fails?
    - Why would the build fail with no errors and no warnings?

        [05:39:06]: [Target "Obfuscate"] Finished.
        [05:39:06]: [Target "Obfuscate"] Saving exception map
        [05:49:21]: [Target "Obfuscate"] Ended at 11/05/2010 05:49:21, ~1 hour, 48 minutes, 6 seconds
        [05:49:22]: [Target "Obfuscate"] Done.
        [05:49:51]: MSBuild output:
        Ended at 11/05/2010 05:49:21, ~1 hour, 48 minutes, 6 seconds (TaskId:8)
        Done. (TaskId:8)
        Done executing task "VeilProject" -- FAILED. (TaskId:8)
        Done building target "Obfuscate" in project "AMK_Release.proj.teamcity.patch.tcprojx" -- FAILED.: (TargetId:12)
        Done Building Project "C:\Builds\Scripts\AMK_Release.proj.teamcity.patch.tcprojx" (All target(s)) -- FAILED.

        Project Performance Summary:
              6535484 ms  C:\Builds\Scripts\AMK_Release.proj.teamcity.patch.tcprojx   1 calls
              6535484 ms  All                                                         1 calls

        Target Performance Summary:
                  156 ms  PreClean               1 calls
                  266 ms  SetBuildVersionNumber  1 calls
                 2406 ms  CopyFiles              1 calls
              6532391 ms  Obfuscate              1 calls

        Task Performance Summary:
                   16 ms  MakeDir                 2 calls
                   31 ms  TeamCitySetBuildNumber  1 calls
                   31 ms  Message                 1 calls
                   62 ms  RemoveDir               2 calls
                  234 ms  GetAssemblyIdentity     1 calls
                 2406 ms  Copy                    1 calls
              6528047 ms  VeilProject             1 calls

        Build FAILED.
            0 Warning(s)
            0 Error(s)
        Time Elapsed 01:48:57.46

        [05:49:52]: Process exit code: 1
        [05:49:55]: Build finished

    Read the article

  • How to include external classes in a GAE deployment?

    - by kodra
    I am using the Google plug-in for Eclipse and have the following problem: the project consists of a GWT-based GUI talking to a server running on GAE and using JPA. Additionally, there is a project to migrate the legacy data to the new datastore. Since both projects use a common data model, I have extracted a set of interfaces and enums into a separate project and set the other two projects' dependencies on it. The Java app project seems to work, but the GWT/GAE one only works if I manually copy the classes into the WEB-INF/classes directory. Obviously this only works when using the hosted mode. Does anybody know how to configure such a multi-project setup in Eclipse? Also, I am not sure the multi-project layout is the best solution. The set of common model objects is used in all three areas:

    - user client (GWT project compiling the standard client and shared folders)
    - server side (providing services for GWT-RPC, uploading, and different feeds)
    - migration application (posting the legacy data to the upload servlet)

    What are the architectural options for keeping the amount of duplicated classes to a minimum?

    Read the article

  • How can you make the copyright text in a Google Map wrap when the map is small?

    - by Paul D. Waite
    When you embed a Google Map on a web page, copyright text is included on the map. This is the HTML:

        <div style="border-top: 10px solid rgb(204, 0, 0); -moz-user-select: none; z-index: 0;
                    position: absolute; right: 3px; bottom: 2px; color: black;
                    font-family: Arial,sans-serif; font-size: 11px; white-space: normal;
                    text-align: right; margin-left: 70px; width: 210px;" dir="ltr">
            <span></span>
            <span>Map data &copy;2010 LeadDog Consulting, Europa Technologies - </span>
            <a href="http://www.google.com/intl/en_ALL/help/terms_maps.html" target="_blank"
               class="gmnoprint terms-of-use-link" style="color: rgb(119, 119, 204);">Terms of Use</a>
            <span></span>
        </div>

    If you embed a map with a small width, the copyright text extends outside of the <div>, instead of wrapping within it. I've tried using jQuery to select this HTML based on its contents (using :contains()), but it doesn't seem to work in IE 8 (which is odd, as it works fine in IE 7). Any idea what's up with IE 8? Any other methods to achieve the same result?

    Read the article

  • Sybase stored procedure - how do I create an index on a #table?

    - by DVK
    I have a stored procedure which creates and works with a temporary #table. Some of the queries would be tremendously optimized if that temporary #table had an index created on it. However, creating an index within the stored procedure fails:

        create procedure test1 as
            SELECT f1, f2, f3
            INTO #table1
            FROM main_table
            WHERE 1 = 2
            -- insert rows into #table1

            create index my_idx on #table1 (f1)

            SELECT f1, f2, f3
            FROM #table1 (index my_idx)
            WHERE f1 = 11   -- "QUERY X"

    When I call the above, the query plan for "QUERY X" shows a table scan. If I simply run the code above outside the stored procedure, the messages show the following warning:

        Index 'my_idx' specified as optimizer hint in the FROM clause of table '#table1'
        does not exist. Optimizer will choose another index instead.

    This can be resolved when running ad hoc (outside the stored procedure) by splitting the code above into two batches, adding "go" after the index creation:

        create index my_idx on #table1 (f1)
        go

    Now the "QUERY X" query plan shows the use of index "my_idx".

    QUESTION: How do I mimic running the "create index" in a separate batch when it's inside the stored procedure? I can't insert a "go" there like I do with the ad hoc copy above.

    P.S. If it matters, this is on Sybase 12.

    Read the article

  • What is a good way to assign order #s to ordered rows in a table in Sybase

    - by DVK
    I have a table T (structure below) which initially contains all-NULL values in an integer order column:

        col1 varchar(30),
        col2 varchar(30),
        order int NULL

    I also have a way to order the "colN" columns, e.g.:

        SELECT * FROM T ORDER BY some_expression_involving_col1_and_col2

    What's the best way to assign--in SQL--numeric order values 1-N to the order column, so that the order values match the order of rows returned by the above ORDER BY? In other words, I would like a single query (Sybase SQL syntax, so no Oracle rownum) which assigns order values so that

        SELECT * FROM T ORDER BY order

    returns 100% the same order of rows as the query above. The query does NOT necessarily need to update the table T in place; I'm OK with creating a copy of the table T2 if that'll make the query simpler.

    NOTE 1: A solution must be a real query or set of queries, not involving a loop or a cursor.

    NOTE 2: Assume that the data is uniquely orderable according to the ORDER BY above--no need to worry about the situation where 2 rows could be assigned the same order at random.

    NOTE 3: I would prefer a generic solution, but if you wish a specific example of an ordering expression, let's say:

        SELECT * FROM T ORDER BY
            CASE WHEN col1 = "" THEN "AAAAAA" ELSE col1 END,
            ISNULL(col2, "ZZZ")

    Read the article

  • Is there a significant mechanical difference between these faux simulations of default parameters?

    - by ccomet
    C# 4.0 introduced a very fancy and useful thing by allowing default parameters in methods. But C# 3.0 doesn't. So if I want to simulate "default parameters", I have to create two of that method: one with those arguments and one without. There are two ways I could do this.

    Version A - call the other method:

        public string CutBetween(string str, string left, string right, bool inclusive)
        {
            return str.CutAfter(left, inclusive).CutBefore(right, inclusive);
        }

        public string CutBetween(string str, string left, string right)
        {
            return CutBetween(str, left, right, false);
        }

    Version B - copy the method body:

        public string CutBetween(string str, string left, string right, bool inclusive)
        {
            return str.CutAfter(left, inclusive).CutBefore(right, inclusive);
        }

        public string CutBetween(string str, string left, string right)
        {
            return str.CutAfter(left, false).CutBefore(right, false);
        }

    Is there any real difference between these? This isn't a question about optimization or resource usage or anything (though part of it is my general goal of remaining consistent); I don't even think there is any significant effect in picking one method or the other, but I find it wiser to ask about these things than perchance faultily assume.
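
    For contrast, here is the shape the question is simulating, in a language that has default parameters--a minimal Python sketch with hypothetical cut_after/cut_before helpers standing in for the C# extension methods:

        def cut_after(s, marker, inclusive=False):
            i = s.index(marker)
            return s[i:] if inclusive else s[i + len(marker):]

        def cut_before(s, marker, inclusive=False):
            i = s.index(marker)
            return s[:i + len(marker)] if inclusive else s[:i]

        def cut_between(s, left, right, inclusive=False):
            # One definition serves both call shapes; no forwarding overload.
            return cut_before(cut_after(s, left, inclusive), right, inclusive)

        print(cut_between("a[b]c", "[", "]"))                  # b
        print(cut_between("a[b]c", "[", "]", inclusive=True))  # [b]

    The default is baked into the single definition, which is exactly what Version A approximates by forwarding.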

    Read the article

  • Distributing a bundle of files across an extranet

    - by John Zwinck
    I want to be able to distribute bundles of files, about 500 MB per bundle, to all machines on a corporate "extranet" (which is basically a few LANs connected using various private mechanisms, including leased lines and VPN). The total number of hosts is roughly 100, and the goal is to get a copy of the bundle from one host onto all the other hosts reliably, quickly, and efficiently. One important issue is that some hosts are grouped together on single fast LANs, in which case the network I/O should be done once from one group to the next and then within each group between all the peers. This is as opposed to a strict central-server system, where multiple hosts might each fetch the same bundle over a slow link, rather than fetching it once via the slow link and then distributing it between each other quickly. A new bundle will be produced every few days, and occasionally old bundles will be deleted (but that problem can be solved separately). The machines in question happen to run recent Linuxes, but bonus points will go to solutions which are at least somewhat cross-platform (in which case the bundle might differ per platform, but maybe the same mechanism can be used). That's pretty much it. I'm not opposed to writing some code to handle this, but it would be preferable if it were one of bash, Python, Ruby, Lua, C, or C++.
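
    The group-aware topology described--cross the slow link once per LAN, then fan out locally--can be sketched in a few lines of Python around rsync (host names and the bundle path are invented for illustration):

        import subprocess

        BUNDLE = "/srv/bundles/current.tar"
        GROUPS = [                      # one list per fast LAN; first host is the seed
            ["nyc-01", "nyc-02", "nyc-03"],
            ["ldn-01", "ldn-02"],
        ]

        def run(cmd):
            subprocess.run(cmd, check=True)

        for group in GROUPS:
            seed, peers = group[0], group[1:]
            # Cross the slow link exactly once per group.
            run(["rsync", "-a", BUNDLE, f"{seed}:{BUNDLE}"])
            for peer in peers:
                # rsync has no remote-to-remote mode, so run the fast
                # intra-LAN copies on the seed itself via ssh.
                run(["ssh", seed, "rsync", "-a", BUNDLE, f"{peer}:{BUNDLE}"])

    The intra-group pushes can run in parallel, and a torrent-style distributor achieves the same effect without hand-maintained group lists.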

    Read the article

  • TortoiseSVN lists files as modified, but they are identical

    - by BJ Safdie
    I am merging a hot fix from our QA branch back into our Dev branch. Five files have changed. I do a fresh checkout of the Dev branch. I then do a merge (range of revisions) from QA into the Dev working copy. It brings in five files, and there is a conflict on an externals property and an ignore property--which I resolve by "using local" (Dev). When I check for modifications or commit, I expect to see the five files I merged as the only changes. However, I get close to 700 "modified" files showing up in the commit dialog. If I select one of these files and "Compare with base", WinMerge comes up and says the "files are identical". I have tried this with the file dates set to "last committed" and not. Why are all of these files showing up as modified when they are identical? What in the merge is causing this? How do I prevent SVN/TortoiseSVN from getting confused this way in the future?

    Read the article

  • Why do case class companion objects extend FunctionN?

    - by retronym
    When you create a case class, the compiler creates a corresponding companion object with a few of the case class goodies: an apply factory method matching the primary constructor, equals, hashCode, and copy. Somewhat oddly, this generated object extends FunctionN:

        scala> case class A(a: Int)
        defined class A

        scala> A: (Int => A)
        res0: (Int) => A = <function1>

    This is only the case if:

    - there is no manually defined companion object,
    - there is exactly one parameter list,
    - there are no type arguments, and
    - the case class isn't abstract.

    Seems like this was added about two years ago; the latest incarnation is here. Does anyone use this, or know why it was added? It increases the size of the generated bytecode a little with static forwarder methods, and shows up in the #toString() method of the companion objects:

        scala> case class A()
        defined class A

        scala> A.toString
        res12: java.lang.String = <function0>

    Read the article
