Search Results

Search found 710 results on 29 pages for 'redundant'.


  • Is Stream.Write thread-safe?

    - by Mike Spross
    I'm working on a client/server library for a legacy RPC implementation and was running into issues where the client would sometimes hang when waiting to receive a response message to an RPC request message. It turns out the real problem was in my message framing code (I wasn't handling message boundaries correctly when reading data off the underlying NetworkStream), but it also made me suspicious of the code I was using to send data across the network, specifically in the case where the RPC server sends a large amount of data to a client as the result of a client RPC request. My send code uses a BinaryWriter to write a complete "message" to the underlying NetworkStream. The RPC protocol also implements a heartbeat algorithm, where the RPC server sends out PING messages every 15 seconds. The pings are sent out by a separate thread, so, at least in theory, a ping can be sent while the server is in the middle of streaming a large response back to a client. Suppose I have a Send method as follows, where stream is a NetworkStream:

        public void Send(Message message)
        {
            // Write the message to a temporary stream so we can send it all at once.
            MemoryStream tempStream = new MemoryStream();
            message.WriteToStream(tempStream);

            // Write the serialized message to the stream.
            // The BinaryWriter is a little redundant in this simplified example,
            // but it is here because the production code uses it.
            byte[] data = tempStream.ToArray();
            BinaryWriter bw = new BinaryWriter(stream);
            bw.Write(data, 0, data.Length);
            bw.Flush();
        }

    So the question I have is: is the call to bw.Write (and by implication the call to the underlying Stream's Write method) atomic? That is, if a lengthy Write is still in progress on the sending thread, and the heartbeat thread kicks in and sends a PING message, will that thread block until the original Write call finishes, or do I have to add explicit synchronization to the Send method to prevent the two Send calls from clobbering the stream?
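
    As far as the documentation goes, NetworkStream only promises safe simultaneous use for one reader plus one writer, not for two concurrent writers, so the usual answer is to serialize the writers yourself. Below is a minimal sketch of that option, assuming both the response path and the heartbeat thread send through this same method and share one lock object (names are illustrative, not from the original code):

        private readonly object streamLock = new object();

        public void Send(Message message)
        {
            MemoryStream tempStream = new MemoryStream();
            message.WriteToStream(tempStream);
            byte[] data = tempStream.ToArray();

            // Only one thread at a time may touch the NetworkStream, so a PING
            // frame can never be spliced into the middle of a response frame.
            lock (streamLock)
            {
                stream.Write(data, 0, data.Length);
                stream.Flush();
            }
        }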


  • Question on overview of C# OOP in business WinForms application - scope of Object

    - by TimR
    I may have all this OO completely wrong, but here goes. OK, the scenario is a classic order entry: a Customer places an Order, which has OrderLineItems of StockItems. The Order is entered by an Employee.

    1) Application starts and asks for login/password
    2) Employee selects 'Orders' from MainMenu form
    3) Orders form opens...
    4) Employee selects Customer
    5) Employee selects Stock, adds to OrderLineItems
    6) Selects second StockItem; add to OrderLineItems
    7) Order is committed [stock decremented, order posted to DB, Order printed]
    8) Employee is returned to MainMenu

    Now with object scope:

    1) Application starts and asks for login/password. Is this the best place to make objEmployee, to be kept whilst in this whole Sales application?
    2) Employee selects 'Orders' from MainMenu form
    3) Orders form opens... Make objOrderHeader; is objEmployee able to be passed in, or is it created here, or re-created here?
    4) Employee selects Customer - adds/edits Customer details if required... Make objCustomer
    5) Employee selects Stock, adds to OrderLineItems... Make objStockItem and objOrderLineItem - add to objOrderLineItems_collection
    6) Selects second StockItem; add to OrderLineItems... Make objStockItem and objOrderLineItem - add to objOrderLineItems_collection
    7) Order is committed [stock decremented, order posted to DB, Order printed, Order Entered By = EmployeeID]. Once posted to the DB, all objects are now redundant/garbage [except objEmployee?]
    8) Employee is returned to MainMenu. Is objEmployee still valid as an object?


  • A good solution for displaying galleries with lytebox and php

    - by Johann
    Hello, I have thought for a while over an issue with the loading of images on a website solution that I have programmed (for fun and the experience). The programming language used is PHP, with MySQL as the database; it also uses JavaScript, but not extensively. I have recently realized that the engine I programmed, while it has its smart solutions, also carries a lot of flaws and redundant code. I have therefore decided to make a new one, now incorporating what I know but didn't when I started the previous project. In the new system there will be an option to add galleries to a site and upload images to them. I have used the JavaScript image viewer Lytebox before: the screen goes dark and an image appears with "Previous" and "Next" buttons to view the other images. The problem is that I used groups with Lytebox and the images themselves, resized as thumbs. This causes Lytebox to work only when all the images have loaded; if you click a link before that, the image is shown as if you right-click and choose "Show image". Information about these images is parsed from a database using a while statement with a counter that goes from 0 to sizeof(). I'm thinking it probably isn't a good idea to have the full images as the thumbs, even if you restrict the upload size. Likewise, adding thumbs at upload also seems like a hassle. It would be practical if the thumbs didn't show up before they were fully loaded. Has anyone got any good tips? Any help would be appreciated. Johann
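
    Generating a real thumbnail file once, at upload time, is usually less of a hassle than it looks and avoids shipping the full-size image to the gallery page. A minimal sketch using PHP's GD extension (the file paths and the 150px target width are just placeholders):

        <?php
        function make_thumbnail($srcPath, $thumbPath, $thumbWidth = 150) {
            list($width, $height) = getimagesize($srcPath);
            $thumbHeight = (int) round($height * ($thumbWidth / $width));

            $src   = imagecreatefromjpeg($srcPath);
            $thumb = imagecreatetruecolor($thumbWidth, $thumbHeight);
            imagecopyresampled($thumb, $src, 0, 0, 0, 0,
                               $thumbWidth, $thumbHeight, $width, $height);
            imagejpeg($thumb, $thumbPath, 80);

            imagedestroy($src);
            imagedestroy($thumb);
        }

        // At upload time: keep the original for Lytebox, the thumb for the gallery grid.
        make_thumbnail('uploads/gallery7/photo.jpg', 'uploads/gallery7/thumbs/photo.jpg');
        ?>

    The gallery page then links each small thumb <img> to its original inside the Lytebox group, so the grid loads quickly and each full image is only fetched when clicked.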


  • Using CSS to insert text

    - by abelenky
    I'm relatively new to CSS, and have used it to change the style and formatting of text. I would now like to use it to insert text, as shown below:

        <span class="OwnerJoe">reconcile all entries</span>

    Which I hope I could get to show as:

        Joe's Task: reconcile all entries

    That is, simply by virtue of being of class "OwnerJoe", I want the text "Joe's Task:" to be displayed. I could do it with code like:

        <span class="OwnerJoe">Joe's Task:</span> reconcile all entries

    But that seems awfully redundant to both specify the class and the text. Is it possible to do what I'm looking for?

    EDIT: One idea is to try to set it up as a list item <li> where the "bullet" is the text "Joe's Task". I see examples of how to set various bullet styles and even images for bullets. Is it possible to use a small block of text for the list bullet?
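
    CSS generated content does exactly this; a minimal sketch (the :before pseudo-element is widely supported, though very old browsers such as IE7 and earlier ignore it):

        .OwnerJoe:before {
            content: "Joe's Task: ";
            font-weight: bold;
        }

    With that rule in place, <span class="OwnerJoe">reconcile all entries</span> renders as "Joe's Task: reconcile all entries" without repeating the text in the markup.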


  • django simple approach to multi-field search

    - by Scott Willman
    I have a simple address book app that I want to make searchable. The model would look something like:

        class Address(models.Model):
            address1 = models.CharField("Address Line 1", max_length=128)
            address2 = models.CharField("Address Line 2", max_length=128)
            city = models.CharField("City", max_length=128)
            state = models.CharField("State", max_length=24)
            zipCode = models.CharField("Zip Code", max_length=24)

            def __unicode__(self):
                return "%s %s, %s, %s, %s" % (self.address1, self.address2, self.city, self.state, self.zipCode)

        class Entry(models.Model):
            name = models.CharField("Official School Name", max_length=128)
            createdBy = models.ForeignKey(User)
            address = models.ForeignKey(Address, unique=True)

            def __unicode__(self):
                return "%s - %s, %s" % (self.name, self.address.city, self.address.state)

    I want the searching to be fairly loose, like: "Bank of America Los Angeles 91345". It seems like I want a field that combines all of those elements into one that I can search, but that also seems redundant. I was hoping I could add a method to the Entry model like this:

        def _getSearchText(self):
            return "%s %s %s" % (self.name, self.address, self.mascot)
        searchText = property(_getSearchText)

    ...and search that as a field, but I suppose that's wishful thinking... How should I approach this using basic Django and SQLite (this is a learning exercise)? Thank you!!
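
    A Python property can't be filtered on because the filtering happens in SQL, but the loose matching can be expressed with Q objects spanning the related Address fields. A minimal sketch, assuming every whitespace-separated term must match at least one field:

        from django.db.models import Q

        def search_entries(query):
            entries = Entry.objects.all()
            for term in query.split():
                # Each term must hit at least one of the name/address fields.
                entries = entries.filter(
                    Q(name__icontains=term) |
                    Q(address__address1__icontains=term) |
                    Q(address__city__icontains=term) |
                    Q(address__state__icontains=term) |
                    Q(address__zipCode__icontains=term)
                )
            return entries

        results = search_entries("Bank of America Los Angeles 91345")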


  • Python and the self parameter

    - by Svend
    I'm having some issues with the self parameter, and some seemingly inconsistent behavior in Python is annoying me, so I figure I had better ask some people in the know. I have a class, Foo. This class will have a bunch of methods, m1 through mN. For some of these, I will use a standard definition, like in the case of m1 below. But for others, it's more convenient to just assign the method name directly, like I've done with m2 and m3.

        import os

        def myfun(x, y):
            return x + y

        class Foo():
            def m1(self, y, z):
                return y + z + 42
            m2 = os.access
            m3 = myfun

        f = Foo()
        print f.m1(1, 2)
        print f.m2("/", os.R_OK)
        print f.m3(3, 4)

    Now, I know that os.access does not take a self parameter (seemingly), and it still has no issues with this type of assignment. However, I cannot do the same for my own modules (imagine myfun defined off in mymodule.myfun). Running the above code yields the following output:

        3
        True
        Traceback (most recent call last):
          File "foo.py", line 16, in <module>
            print f.m3(3, 4)
        TypeError: myfun() takes exactly 2 arguments (3 given)

    The problem is that, due to the framework I work in, I cannot avoid having a class Foo at least. But I'd like to avoid having my mymodule stuff in a dummy class. In order to do this, I would need to do something like:

        def m3(self, a1, a2):
            return mymodule.myfun(a1, a2)

    Which is hugely redundant when you have like 20 of them. So, the question is: either how do I do this in a totally different and obviously much smarter way, or how can I make my own modules behave like the built-in ones, so it does not complain about receiving one argument too many?
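
    The difference is that plain Python functions become methods when looked up on an instance (and so receive self), while C-implemented built-ins like os.access do not. One common way to avoid the dummy wrappers is staticmethod; a minimal sketch, with mymodule standing in for the hypothetical module from the question:

        import os
        import mymodule  # hypothetical module providing myfun(x, y)

        class Foo():
            def m1(self, y, z):
                return y + z + 42

            m2 = os.access                     # built-in, never gets bound, works as-is
            m3 = staticmethod(mymodule.myfun)  # suppresses the implicit self argument

        f = Foo()
        print f.m3(3, 4)  # prints 7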


  • Database design - table relationship question

    - by iama
    I am designing the schema for a simple quiz application. It has two tables: "Question" and "Answer Choices". The Question table has 'question ID', 'question text' and 'answer ID' columns. The "Answer Choices" table has 'question ID', 'answer ID' and 'answer text' columns. With this simple schema it is obvious that a question can have multiple answer choices, hence the need for the answer choices table. However, a question can have only one correct answer, hence the 'answer ID' in the question table. However, this 'answer ID' column in the question table gives the illusion that there can be multiple questions for a single answer, which is not correct. The other alternative, to eliminate this illusion, is to have another table just for the correct answer, with just two columns, question ID and answer ID, and a 1-1 relationship between the two tables. However, I think this is redundant. Any recommendation on how best to design this, thereby enforcing the rules that a question can have multiple answer choices but only one correct answer? Many thanks.
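
    One common layout is to keep the correct-answer marker on the answer choices themselves; a rough sketch in generic SQL (the names are illustrative):

        CREATE TABLE question (
            question_id   INT PRIMARY KEY,
            question_text VARCHAR(500) NOT NULL
        );

        CREATE TABLE answer_choice (
            question_id INT NOT NULL REFERENCES question (question_id),
            answer_id   INT NOT NULL,
            answer_text VARCHAR(500) NOT NULL,
            is_correct  BOOLEAN NOT NULL DEFAULT FALSE,  -- exactly one TRUE per question
            PRIMARY KEY (question_id, answer_id)
        );

    The is_correct flag avoids both the extra table and the backwards-looking column, though "exactly one correct answer per question" still has to be enforced with a unique partial index or in application code. The alternative of keeping answer_id on question with a composite foreign key (question_id, answer_id) into answer_choice also removes the illusion, since the pair can only reference that question's own choices.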


  • What does it mean for an OS to "execute within user processes"? Do any modern OS's use that approach?

    - by Chris Cooper
    I have recently become interested in operating systems, and a friend of mine lent me a book called Operating Systems: Internals and Design Principles (I have the third edition), published in 1998. It's been a very interesting book so far, but I have come to the part dealing with process control, and it uses UNIX System V as one of its examples of an operating system that executes within user processes. This concept has struck me as a little strange. First of all, does this mean that OS instructions and data are stored in each of the user processes? Probably not, because that would be an absurdly redundant scheme. But if not, then what does it mean to "execute within" a user process? Do any modern operating systems use this approach? It seems much more logical to have the operating system execute as its own process, or even independently of all processes, if you're short on memory. All the inter-accessibility of process data required for this layout seems to greatly complicate things. (But maybe that's just because I don't quite get the concept ;D) Here is what the book says: "Execution within User Processes: An alternative that is common with operating systems on smaller machines is to execute virtually all operating system software in the context of a user process. ..."


  • What is it in the CSS/DOM that prevents an input box with display: block from expanding to the size of its container?

    - by Steven Xu
    Sample HTML/CSS:

        <div class="container">
            <input type="text" />
            <div class="filler"></div>
        </div>

        div.container {
            padding: 5px;
            border: 1px solid black;
            background-color: gray;
        }
        div.filler {
            background-color: red;
            height: 5px;
        }
        input {
            display: block;
        }

    http://jsfiddle.net/bPEkb/3/

    Question: Why doesn't the input box expand to have the same outer width as, say, div.filler? That is to say, why doesn't the input box expand to fit its container like other block elements with width: auto do? I tried checking the "User Agent CSS" in Firebug to see if I could come up with something there. No luck. I couldn't find any specific differences in CSS that I could specifically link to the input box behaving differently from the regular div.filler. Besides curiosity, I'd like to get to the bottom of this to figure out a way to set the width once and forget it. My current practice of explicitly setting the width of both the input and its containing block element seems redundant and less than modular. While I'm familiar with the technique of wrapping the input element in a div and then assigning the input element negative margins, this seems quite undesirable.
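
    The short version is that an input is a replaced element, so even at display: block its width: auto resolves to the element's intrinsic width (driven by the size attribute) rather than stretching to the containing block. A minimal sketch of the usual workaround, set once for all inputs (box-sizing may need -moz-/-webkit- prefixes in older browsers):

        input {
            display: block;
            width: 100%;             /* stretch to the container, like a div would */
            box-sizing: border-box;  /* keep the border and padding inside that width */
        }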


  • Right way to return proxy model instance from a base model instance in Django ?

    - by sotangochips
    Say I have models:

        class Animal(models.Model):
            type = models.CharField(max_length=255)

        class Dog(Animal):
            def make_sound(self):
                print "Woof!"
            class Meta:
                proxy = True

        class Cat(Animal):
            def make_sound(self):
                print "Meow!"
            class Meta:
                proxy = True

    Let's say I want to do:

        animals = Animal.objects.all()
        for animal in animals:
            animal.make_sound()

    I want to get back a series of Woofs and Meows. Clearly, I could just define a make_sound in the original model that forks based on the animal type, but then every time I add a new animal type (imagine they're in different apps), I'd have to go in and edit that make_sound function. I'd rather just define proxy models and have them define the behavior themselves. From what I can tell, there's no way of returning mixed Cat and Dog instances, but I figured maybe I could define a "get_proxy_model" method on the main class that returns a Cat or a Dog model. Surely you could do this, and pass something like the primary key and then just do Cat.objects.get(pk=passed_in_primary_key). But that'd mean doing an extra query for data you already have, which seems redundant. Is there any way to turn an Animal into a Cat or a Dog instance in an efficient way? What's the right way to do what I want to achieve?
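
    Because a proxy model shares its parent's table and fields, one common trick is to re-wrap the row you already fetched by swapping its class, which costs no extra query. A rough sketch, assuming (this is an assumption, the question doesn't say) that Animal.type holds values like 'dog' and 'cat':

        PROXY_CLASSES = {'dog': Dog, 'cat': Cat}

        def as_proxy(animal):
            # Re-label the in-memory instance; no database access happens here.
            animal.__class__ = PROXY_CLASSES.get(animal.type, Animal)
            return animal

        for animal in Animal.objects.all():
            as_proxy(animal).make_sound()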


  • Best Practice: Access form elements by HTML id or name attribute?

    - by seth
    As any seasoned JavaScript developer knows, there are many (too many) ways to do the same thing. For example, say you have a text field as follows:

        <form name="myForm">
            <input type="text" name="foo" id="foo" />

    There are many ways to access this in JavaScript:

        [1] document.forms[0].elements[0];
        [2] document.myForm.foo;
        [3] document.getElementById('foo');
        [4] document.getElementById('myForm').foo;
        ... and so on ...

    Methods [1] and [3] are well documented in the Mozilla Gecko documentation, but neither is ideal. [1] is just too general to be useful and [3] requires both an id and a name (assuming you will be posting the data to a server-side language). Ideally, it would be best to have only an id attribute or a name attribute (having both is somewhat redundant, especially if the id isn't necessary for any CSS, and increases the likelihood of typos, etc.). [2] seems to be the most intuitive and it seems to be widely used, but I haven't seen it referenced in the Gecko documentation and I'm worried about both forwards compatibility and cross-browser compatibility (and of course I want to be as standards-compliant as possible). So what's best practice here? Can anyone point to something in the DOM documentation or W3C specification that could resolve this? Note: I am specifically interested in a non-library solution (jQuery/Prototype).
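
    For what it's worth, the DOM Level 2 HTML specification does define named lookups on the document.forms collection and on a form's elements collection, and those need only the name attribute; a minimal sketch:

        // Both lookups are HTMLCollection named access (DOM Level 2 HTML).
        var form = document.forms['myForm'];
        var field = form.elements['foo'];   // no id attribute required
        field.value = 'hello';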


  • Convert image color space and output separate channels in OpenCV

    - by Victor May
    I'm trying to reduce the runtime of a routine that converts an RGB image to a YCbCr image. My code looks like this:

        cv::Mat input(BGR->m_height, BGR->m_width, CV_8UC3, BGR->m_imageData);
        cv::Mat output(BGR->m_height, BGR->m_width, CV_8UC3);
        cv::cvtColor(input, output, CV_BGR2YCrCb);

        cv::Mat outputArr[3];
        outputArr[0] = cv::Mat(BGR->m_height, BGR->m_width, CV_8UC1, Y->m_imageData);
        outputArr[1] = cv::Mat(BGR->m_height, BGR->m_width, CV_8UC1, Cr->m_imageData);
        outputArr[2] = cv::Mat(BGR->m_height, BGR->m_width, CV_8UC1, Cb->m_imageData);
        split(output, outputArr);

    But this code is slow because there is a redundant split operation which copies the interleaved RGB image into the separate channel images. Is there a way to make the cvtColor function create an output that is already split into channel images? I tried to use constructors of the _OutputArray class that accept a vector or array of matrices as input, but it didn't work.
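
    As far as I know cvtColor only produces an interleaved Mat, so if the extra copy really matters, one option is to fuse the conversion and the planar write-out into a single pass. A rough sketch (the constants are the 8-bit BT.601 ones used by CV_BGR2YCrCb; Y, Cr and Cb are assumed to be the pre-allocated single-channel Mats wrapping the buffers above):

        #include <opencv2/core/core.hpp>

        // Fused BGR -> planar YCrCb conversion: one pass, no intermediate interleaved image.
        void bgrToYCrCbPlanar(const cv::Mat& bgr, cv::Mat& Y, cv::Mat& Cr, cv::Mat& Cb)
        {
            for (int r = 0; r < bgr.rows; ++r)
            {
                const cv::Vec3b* src = bgr.ptr<cv::Vec3b>(r);
                uchar* y  = Y.ptr<uchar>(r);
                uchar* cr = Cr.ptr<uchar>(r);
                uchar* cb = Cb.ptr<uchar>(r);
                for (int c = 0; c < bgr.cols; ++c)
                {
                    const double B = src[c][0], G = src[c][1], R = src[c][2];
                    const double yv = 0.299 * R + 0.587 * G + 0.114 * B;
                    y[c]  = cv::saturate_cast<uchar>(yv);
                    cr[c] = cv::saturate_cast<uchar>((R - yv) * 0.713 + 128);
                    cb[c] = cv::saturate_cast<uchar>((B - yv) * 0.564 + 128);
                }
            }
        }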


  • Could I do this blind relative to absolute path conversion (for perforce depot paths) better?

    - by wonderfulthunk
    I need to "blindly" (i.e. without access to the filesystem, in this case the source control server) convert some relative paths to absolute paths. So I'm playing with dotdots and indices. For those that are curious I have a log file produced by someone else's tool that sometimes outputs relative paths, and for performance reasons I don't want to access the source control server where the paths are located to check if they're valid and more easily convert them to their absolute path equivalents. I've gone through a number of (probably foolish) iterations trying to get it to work - mostly a few variations of iterating over the array of folders and trying delete_at(index) and delete_at(index-1) but my index kept incrementing while I was deleting elements of the array out from under myself, which didn't work for cases with multiple dotdots. Any tips on improving it in general or specifically the lack of non-consecutive dotdot support would be welcome. Currently this is working with my limited examples, but I think it could be improved. It can't handle non-consecutive '..' directories, and I am probably doing a lot of wasteful (and error-prone) things that I probably don't need to do because I'm a bit of a hack. I've found a lot of examples of converting other types of relative paths using other languages, but none of them seemed to fit my situation. These are my example paths that I need to convert, from: //depot/foo/../bar/single.c //depot/foo/docs/../../other/double.c //depot/foo/usr/bin/../../../else/more/triple.c to: //depot/bar/single.c //depot/other/double.c //depot/else/more/triple.c And my script: begin paths = File.open(ARGV[0]).readlines puts(paths) new_paths = Array.new paths.each { |path| folders = path.split('/') if ( folders.include?('..') ) num_dotdots = 0 first_dotdot = folders.index('..') last_dotdot = folders.rindex('..') folders.each { |item| if ( item == '..' ) num_dotdots += 1 end } if ( first_dotdot and ( num_dotdots > 0 ) ) # this might be redundant? folders.slice!(first_dotdot - num_dotdots..last_dotdot) # dependent on consecutive dotdots only end end folders.map! { |elem| if ( elem !~ /\n/ ) elem = elem + '/' else elem = elem end } new_paths << folders.to_s } puts(new_paths) end


  • Am I understanding premature optimization correctly?

    - by Ed Mazur
    I've been struggling with an application I'm writing and I think I'm beginning to see that my problem is premature optimization. The perfectionist side of me wants to make everything optimal and perfect the first time through, but I'm finding this is complicating the design quite a bit. Instead of writing small, testable functions that do one simple thing well, I'm leaning towards cramming in as much functionality as possible in order to be more efficient. For example, I'm avoiding multiple trips to the database for the same piece of information at the cost of my code becoming more complex. One part of me wants to just not worry about redundant database calls. It would make it easier to write correct code and the amount of data being fetched is small anyway. The other part of me feels very dirty and unclean doing this. :-) I'm leaning towards just going to the database multiple times, which I think is the right move here. It's more important that I finish the project and I feel like I'm getting hung up because of optimizations like this. My question is: is this the right strategy to be using when avoiding premature optimization?


  • How to synchronize Silverlight clients with WCF?

    - by user564226
    Hi, this is probably only a conceptual problem, but I cannot seem to find the ideal solution. I'd like to create a Silverlight client application that uses WCF to control a third-party application via some self-written web service. If there is more than one Silverlight client, all clients should be synchronized, i.e. parameter changes from one client should be propagated to all clients. I set up a very simple Silverlight GUI that manipulates parameters which are passed to the server (the class inherits INotifyPropertyChanged):

        public double Height
        {
            get { return frameworkElement.Height; }
            set
            {
                if (frameworkElement.Height != value)
                {
                    frameworkElement.Height = value;
                    OnPropertyChanged("Height", value);
                }
            }
        }

    OnPropertyChanged is responsible for transferring data. The WCF service (duplex net.tcp) maintains a list of all clients, and as soon as it receives a data packet (an XElement with the parameter change description) it forwards this very packet to all clients but the one the packet was received from. The client receives the packet, but now I'm not sure what the best way is to set the property internally. If I use "Height" (see above), a new change message would be generated and sent to all other clients, and so on. Maybe I could use the data field (frameworkElement.Height) itself, or a function - but I'm not sure whether problems would arise with data binding later on. Also, I don't want to simply duplicate parts of the property code, to prevent bugs from redundant code. So what would you recommend? Thanks!
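
    One common pattern is to keep using the normal setter (so data binding still sees the change) but suppress the outgoing message while a remote update is being applied. A minimal sketch; SendChangeToService and RaisePropertyChanged are hypothetical stand-ins for the existing plumbing:

        private bool applyingRemoteChange;

        // Called when a change packet arrives from the WCF service.
        public void ApplyRemoteChange(string propertyName, double value)
        {
            applyingRemoteChange = true;
            try
            {
                if (propertyName == "Height")
                    Height = value;   // normal setter, so bindings update as usual
            }
            finally
            {
                applyingRemoteChange = false;
            }
        }

        private void OnPropertyChanged(string propertyName, object value)
        {
            RaisePropertyChanged(propertyName);           // local INotifyPropertyChanged event
            if (!applyingRemoteChange)
                SendChangeToService(propertyName, value); // only local edits originate messages
        }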


  • JQuery AJAX returned too much data that I had not requested

    - by kwokwai
    Hi all, I am using CakePHP 1.26 and the CDN-hosted jQuery at this URL: http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js

    In an HTML web page, I have these lines of code:

        $.ajax({
            type: "POST",
            url: "http://mywebsite.com/controllers/avail/" + curl,
            success: function(data) { alert(data); }
        });

    and in the PHP page, I have another few lines of code:

        function avail($uname){
            $result1 = $this->Site1->User->findByusername($uname);
            if($result1){
                return 1;
            }
            else{
                return 0;
            }
        }

    As you can see, the avail function will return either zero or one. But there was some redundant data returned from the server; what I saw in the alert box was something like this (rather than 0 or 1):

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <title>Missing Method in Controller</title>
        <meta http-equiv="content-type" content="text/html; charset=utf-8">
        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"></script>
        <style type="text/css">
        /* CSS Document */
        /*PAGE LAYOUT*/
        0
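
    Two things stand out in that response: the title is Cake's "Missing Method in Controller" error page, which suggests the URL isn't routing to the avail action at all (Cake URLs are normally /controller_name/action, not /controllers/action), and a plain return from a controller action never reaches the browser anyway, because Cake renders a view and layout instead. A rough sketch of one common CakePHP 1.x pattern, echoing the value and disabling rendering:

        function avail($uname = null) {
            $this->autoRender = false;   // don't render a view or layout for this action
            $result1 = $this->Site1->User->findByusername($uname);
            echo $result1 ? 1 : 0;       // the echoed body is what jQuery's success callback receives
        }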


  • Avoid an "out of memory error" in Java(eclipse), when using large data structure?

    - by gnomed
    OK, so I am writing a program that unfortunately needs to use a huge data structure to complete its work, but it is failing with an "out of memory" error during its initialization. While I understand entirely what that means and why it is a problem, I am having trouble overcoming it, since my program needs to use this large structure and I don't know any other way to store it. The program first indexes a large corpus of text files that I provide. This works fine. Then it uses this index to initialize a large 2D array. This array will have n x n entries, where "n" is the number of unique words in the corpus of text. For the relatively small chunk I am testing it on (about 60 files) it needs to make approximately 30,000 x 30,000 entries. This will probably be bigger once I run it on my full intended corpus too. It consistently fails every time, after it indexes, while it is initializing the data structure (to be worked on later). Things I have done include: revamping my code to use a primitive "int[]" instead of a "TreeMap", eliminating redundant structures, etc... Also, I have run Eclipse with "eclipse -vmargs -Xmx2g" to max out my allocated memory. I am fairly confident this is not going to be solved by a simple line of code, but is most likely going to require a very new approach. I am looking for what that approach is - any ideas? Thanks, B.
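
    For word-by-word matrices the vast majority of cells are usually zero, so a sparse representation that stores only non-zero counts often fits where a dense 30,000 x 30,000 int array (roughly 3.6 GB) cannot. A minimal sketch, assuming the existing index already maps each unique word to an int ID:

        import java.util.HashMap;
        import java.util.Map;

        // Sparse co-occurrence matrix: only non-zero cells are stored.
        public class SparseCooccurrenceMatrix {
            private final Map<Long, Integer> cells = new HashMap<Long, Integer>();

            // Pack (row, col) into a single long key.
            private static long key(int row, int col) {
                return ((long) row << 32) | (col & 0xFFFFFFFFL);
            }

            public void increment(int row, int col) {
                long k = key(row, col);
                Integer current = cells.get(k);
                cells.put(k, current == null ? 1 : current + 1);
            }

            public int get(int row, int col) {
                Integer v = cells.get(key(row, col));
                return v == null ? 0 : v;
            }
        }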


  • Help with Java Generics: Cannot use "Object" as argument for "? extends Object"

    - by AniDev
    Hello, I have the following code:

        import java.util.*;

        public class SellTransaction extends Transaction {
            private Map<String,? extends Object> origValueMap;

            public SellTransaction(Map<String,? extends Object> valueMap) {
                super(Transaction.Type.Sell);
                assignValues(valueMap);
                this.origValueMap = valueMap;
            }

            public SellTransaction[] splitTransaction(double splitAtQuantity) {
                Map<String,? extends Object> valueMapPart1 = origValueMap;
                valueMapPart1.put(nameMappings[3], (Object) new Double(splitAtQuantity));
                Map<String,? extends Object> valueMapPart2 = origValueMap;
                valueMapPart2.put(nameMappings[3], ((Double) origValueMap.get(nameMappings[3])) - splitAtQuantity);
                return new SellTransaction[] {new SellTransaction(valueMapPart1), new SellTransaction(valueMapPart2)};
            }
        }

    The code fails to compile when I call valueMapPart1.put and valueMapPart2.put, with the error:

        The method put(String, capture#5-of ? extends Object) in the type Map is not applicable for the arguments (String, Object)

    I have read on the Internet about generics and wildcards and captures, but I still don't understand what is going wrong. My understanding is that the Map's values can be any class that extends Object, which I think might be redundant, because all classes extend Object. And I cannot change the generics to something like ? super Object, because the Map is supplied by some library. So why is this not compiling? Also, if I try to cast valueMap to Map<String,Object>, the compiler gives me that 'Unchecked conversion' warning. Thanks!
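
    The wildcard means "a map of String to some unknown specific type", so the compiler cannot let you put an Object into it: reads are fine, writes are not. Since the map only arrives from the library, one way out is to copy it into a Map<String, Object> you own before mutating it (which, as a side effect, also stops both halves from aliasing and modifying the same origValueMap). A rough sketch:

        public SellTransaction[] splitTransaction(double splitAtQuantity) {
            // The copies are plain Map<String, Object>, so put() is allowed.
            Map<String, Object> valueMapPart1 = new HashMap<String, Object>(origValueMap);
            valueMapPart1.put(nameMappings[3], Double.valueOf(splitAtQuantity));

            Map<String, Object> valueMapPart2 = new HashMap<String, Object>(origValueMap);
            double original = ((Double) origValueMap.get(nameMappings[3])).doubleValue();
            valueMapPart2.put(nameMappings[3], Double.valueOf(original - splitAtQuantity));

            return new SellTransaction[] {
                new SellTransaction(valueMapPart1),
                new SellTransaction(valueMapPart2)
            };
        }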


  • Auditing front end performance on web application

    - by user1018494
    I am currently trying to performance tune the UI of a company web application. The application is only ever going to be accessed by staff, so the speed of the connection between the server and client will always be considerably more than if it was on the internet. I have been using performance auditing tools such as YSlow and Google Chrome's profiling tool to try and highlight areas that are worth targeting for investigation. However, these tools are written with the internet in mind. For example, the current suggestions from a Google Chrome audit of the application are as follows:

    Network Utilization
    - Combine external CSS (Red warning)
    - Combine external JavaScript (Red warning)
    - Enable gzip compression (Red warning)
    - Leverage browser caching (Red warning)
    - Leverage proxy caching (Amber warning)
    - Minimise cookie size (Amber warning)
    - Parallelize downloads across hostnames (Amber warning)
    - Serve static content from a cookieless domain (Amber warning)

    Web Page Performance
    - Remove unused CSS rules (Amber warning)
    - Use normal CSS property names instead of vendor-prefixed ones (Amber warning)

    Are any of these bits of advice totally redundant given the connection speed and usage pattern? The users will be using the application frequently throughout the day, so it doesn't matter if the initial hit is large (when they first visit the page and build their cache) so long as a minimal amount of work is done on future page views. For example, is it worth the effort of combining all of our CSS and JavaScript files? It may speed up the initial page view, but how much of a difference will it really make on subsequent page views throughout the working day? I've tried searching for this, but all I keep coming up with is the standard internet-facing performance advice. Any advice on what to focus my performance tweaking efforts on in this scenario, or other auditing tool recommendations, would be much appreciated.


  • reading newlines with FORMAT statement

    - by peter.murray.rust
    I'm writing a preprocessor and postprocessor for Fortran input and output using FORMAT-like statements (there are reasons not to use a Fortran library). I want to treat the newline ("/") edit descriptor correctly. I don't have a Fortran compiler immediately to hand. Is there a simple algorithm for working out how many newlines are written or consumed? (This post just gives reading examples.) [Please assume a FORTRAN77-like mentality in the Fortran code and correct any Fortran syntax on my part.]

    UPDATE: no comments yet, so I am reduced to finding a compiler and running it myself. I'll post the answers if I'm not beaten to it. No-one commented that I had the format syntax wrong. I've changed it, but there may still be errors.

    Assume datafile 1 contains the lines: a, b, c, d, etc...

    (a) Does the READ command always consume a newline? Does
        READ(1, '(A)') A
        READ(1, '(A)') B
    give A='a' and B='b'?

    (b) What does
        READ(1, '(A,/)') A
        READ(1, '(A)') B
    give for B? (I would assume 'c')

    (c) What does
        READ(1, '(/)')
        READ(1, '(A)') A
    give for A? (Is it 'b' or 'c'?)

    (d) What does
        READ(1, '(A,/,A)') A, B
        READ(1, '(A)') C
    give for A and B and C? (Can I assume 'a' and 'b' and 'c'?)

    (e) What does
        READ(1, '(A,/,/,A)') A, B
        READ(1, '(A)') C
    give for A and B and C? (Can I assume 'a' and 'c' and 'd'?)

    Are there any cases in which the '/' is redundant?


  • Merge computed data from two tables back into one of them

    - by Tyler McHenry
    I have the following situation (as a reduced example). Two tables, Measures1 and Measures2, each of which stores an ID, a Weight in grams, and optionally a Volume in fluid ounces. (In reality, Measures1 has a good deal of other data that is irrelevant here.)

    Contents of Measures1:

        +----+----------+--------+
        | ID | Weight   | Volume |
        +----+----------+--------+
        |  1 | 100.0000 | NULL   |
        |  2 | 200.0000 | NULL   |
        |  3 | 150.0000 | NULL   |
        |  4 | 325.0000 | NULL   |
        +----+----------+--------+

    Contents of Measures2:

        +----+----------+----------+
        | ID | Weight   | Volume   |
        +----+----------+----------+
        |  1 |  75.0000 |  10.0000 |
        |  2 | 400.0000 |  64.0000 |
        |  3 | 100.0000 |  22.0000 |
        |  4 | 500.0000 | 100.0000 |
        +----+----------+----------+

    These tables describe equivalent weights and volumes of a substance. E.g. 10 fluid ounces of substance 1 weighs 75 grams. The IDs are related: ID 1 in Measures1 is the same substance as ID 1 in Measures2. What I want to do is fill in the NULL volumes in Measures1 using the information in Measures2, but keeping the weights from Measures1 (then, ultimately, I can drop the Measures2 table, as it will be redundant). For the sake of simplicity, assume that all volumes in Measures1 are NULL and all volumes in Measures2 are not. I can compute the volumes I want to fill in with the following query:

        SELECT Measures1.ID, Measures1.Weight,
               (Measures2.Volume * (Measures1.Weight / Measures2.Weight)) AS DesiredVolume
        FROM Measures1
        JOIN Measures2 ON Measures1.ID = Measures2.ID;

    Producing:

        +----+----------+-----------------+
        | ID | Weight   | DesiredVolume   |
        +----+----------+-----------------+
        |  4 | 325.0000 | 65.000000000000 |
        |  3 | 150.0000 | 33.000000000000 |
        |  2 | 200.0000 | 32.000000000000 |
        |  1 | 100.0000 | 13.333333333333 |
        +----+----------+-----------------+

    But I am at a loss for how to actually insert these computed values into the Measures1 table. Preferably, I would like to be able to do it with a single query, rather than writing a script or stored procedure that iterates through every ID in Measures1. But even then I am worried that this might not be possible, because the MySQL documentation says that you can't use a table in an UPDATE query and a SELECT subquery at the same time, and I think any solution would need to do that. I know that one workaround might be to create a new table with the results of the above query (also selecting all of the other non-Volume fields in Measures1) and then drop both tables and replace Measures1 with the newly-created table, but I was wondering if there was any better way to do it that I am missing.
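
    The subquery restriction only bites when the updated table appears in a subquery; MySQL's multi-table UPDATE syntax joins the two tables directly and avoids it. A minimal sketch of that single statement:

        UPDATE Measures1
        JOIN Measures2 ON Measures1.ID = Measures2.ID
        SET Measures1.Volume = Measures2.Volume * (Measures1.Weight / Measures2.Weight)
        WHERE Measures1.Volume IS NULL;

    The WHERE clause is just a safety net for rows that already have a volume; once the results check out, Measures2 can be dropped.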


  • Resize an image and maintain quality?

    - by JasonS
    Hi, I have a problem with resizing images. What happens is that if you upload a file larger than the stated parameters, the image is cropped, then saved at 100% quality. So if I upload a large JPEG which is 272 KB, the image is cropped by 100-odd pixels, and the file size then goes up to 1.2 MB. We are saving images at 100% quality. I assume that this is what is causing the problem. The image is exported from Photoshop at 30% quality, which reduces the file size. Re-saving the image at 100% quality creates the same image but, I assume, with a lot of redundant file data. Has anyone encountered this before? Does anyone have a solution? This is what we are using:

        $source_im = imagecreatefromjpeg ($file);
        $dest_im = imagecreatetruecolor ($newsize_x, $newsize_y);

        imagecopyresampled (
            $dest_im, $source_im,
            0, 0,
            $offset_x, $offset_y,
            $newsize_x, $newsize_y,
            $sourceWidth, $sourceHeight
        );

        imagedestroy ($source_im);

        if ($greyscale) {
            $dest_im = $this->imageconvertgreyscale ($dest_im);
        }

        imagejpeg($dest_im, $save_to_file, $quality);
        break;


  • What single software development tool do you think holds the most value?

    - by Phobis
    Every day I realize how much I love Visual Studio for .NET development... but I believe that ReSharper may hold a value that surpasses Visual Studio's (I am using VS 2005 for WPF/WCF development). I decided it would be great to compile a list of the most valuable tools for software development. These can be applications/plug-ins - anything that you think holds GREAT value. Also, please explain the benefits of the tool that you are posting.

    ReSharper:
    - Integrated unit testing
    - "Camel hump" code auto-completion
    - Find usages (inverse of "Go to Declaration")
    - Code formatting and member rearranging
    - Assembly and namespace inclusion (based on your code)
    - Checks for common optimizations and possible bugs in code and suggests/rewrites the code for you (things like null checking, redundant delegate creation, inverting if statements, etc...)
    - Tells you when code can be more generic (may suggest things like "use this interface instead" if your code never refers to something specific on an object)
    - Helps you see code that is not being used and will clean any unused members
    - File structure view helps you jump around the regions of your file (this is really awesome and clean)
    - Class searching (you can use things like camel humps)
    - Asks you which partial file to open once you find a class
    - It also has its own plug-in support, so you can do things like FxCop, documentation and Reflector (all free)

    This thing has so much I don't think I have hit 10% of it yet :) [When I get time, I will try to add more... feel free to help me out]


  • How to access a web service behind a NAT?

    - by jr
    We have a product we are deploying to some small businesses. It is basically a RESTful API over SSL using Tomcat. This is installed on the server in the small business and is accessed via an iPhone or other portable device, so the devices connecting to the server could come from any number of IP addresses. The problem comes with the installation. When we install this service, it always seems to become a problem to set up the port forwarding so the outside world can gain access to Tomcat. It seems most of the time the owner doesn't know the router password, etc., etc. I am trying to research other ways we can accomplish this. I've come up with the following and would like to hear other thoughts on the topic.

    1. Set up an SSH tunnel from each client office to a central server. Basically the remote devices would connect to that central server on a port and that traffic would be tunneled back to Tomcat in the office. It seems kind of redundant to have SSH and then SSL, but really there is no other way to accomplish it, since end-to-end I need SSL (from device to office). I'm not sure of the performance implications here, but I know it would work. We would need to monitor the tunnel and bring it back up if it goes down, handle SSH key exchanges, etc.

    2. Set up UPnP to try and configure the hole for me. It would likely work most of the time, but UPnP isn't guaranteed to be turned on. May be a good next step.

    3. Come up with some type of NAT traversal scheme. I'm just not familiar with these and uncertain of how exactly they work.

    We have access to a centralized server, which is required for the authentication, if that makes it any easier. What else should I be looking at to get this accomplished?
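
    For what it's worth, option 1 (the reverse SSH tunnel) is mostly a one-liner on the office server; a minimal sketch, where relay.example.com and the port numbers are placeholders for the central server you control:

        # Run on the office server: expose local Tomcat SSL (8443) as port 9443 on the relay.
        # -N: no remote command, tunnel only. Devices then connect to relay.example.com:9443.
        ssh -N -R 9443:localhost:8443 tunnel@relay.example.com

    Note that sshd binds remote-forwarded ports to the relay's loopback interface only, unless GatewayPorts is enabled in its sshd_config, and the tunnel still needs a watchdog (a cron job or a tool such as autossh) to restart it when it drops.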


  • Excel - Best Way to Connect With Access Data

    - by gamerzfuse
    Hello there, here is the situation we have:

    a) I have an Access database / application that records a significant amount of data. Significant fields would be hours, # of sales, # of unreturned calls, etc.
    b) I have an Excel document that connects to the Access database and pulls data in to visualize it.

    As it stands now, the Excel file has a Refresh button that loads new data. The data is loaded into a large PivotTable. The main 'visual form' then uses VLOOKUP to get the results for the form, based on the related hours. This operation is slow (~10 seconds) and seems to be redundant and inefficient. Is there a better way to do this? I am willing to go just about any route - just need directions. Thanks in advance!

    Update: I have confirmed (due to helpful comments/responses) that the problem is with the data loading itself. Removing all the VLOOKUPs only took a second or two out of the load time. So the question stands: how can I rapidly and reliably get the data without so much time involved (it loads around 3000 records into the PivotTables)?

