Search Results

Search found 5147 results on 206 pages for '3ds max'.

  • Strange numbers in java socket output

    - by user293163
    I have a small test app: Socket socket = new Socket("jeck.ru", 80); PrintWriter pw = new PrintWriter(socket.getOutputStream(), false); pw.println("GET /ip/ HTTP/1.1"); pw.println("Host: jeck.ru"); pw.println(); pw.flush(); BufferedReader rd = new BufferedReader(new InputStreamReader(socket.getInputStream())); String str; while ((str = rd.readLine()) != null) { System.out.println(str); } Its output is: HTTP/1.1 200 OK Date: Sat, 13 Mar 2010 22:06:51 GMT Content-Type: text/html;charset=utf-8 Transfer-Encoding: chunked Connection: keep-alive Keep-Alive: timeout=5 Server: Apache Cache-Control: max-age=0 Expires: Sat, 13 Mar 2010 22:06:51 GMT 123 <!DOCTYPE html> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <title>??? IP</title> </head> <body> <div style='text-align: center; font: 32pt Verdana;margin-top: 300px'> ??? IP &#151; 94.103.87.153 </div> </body> </html> 0 Where do these numbers (123 and 0) come from?
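
    The stray numbers are almost certainly chunk-size markers: the response is sent with Transfer-Encoding: chunked, so each body chunk is preceded by its length in hexadecimal ("123" = 291 bytes) and a final "0" marks the end of the body. A minimal sketch (not from the original post) of one way to sidestep parsing chunks by hand, assuming the same URL — HttpURLConnection decodes chunked bodies before handing them to application code:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class ChunkedGet {
            public static void main(String[] args) throws Exception {
                // The chunk-size lines ("123", "0") are consumed by HttpURLConnection,
                // so only the decoded HTML reaches readLine().
                HttpURLConnection conn = (HttpURLConnection) new URL("http://jeck.ru/ip/").openConnection();
                BufferedReader rd = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), "UTF-8"));
                String line;
                while ((line = rd.readLine()) != null) {
                    System.out.println(line);
                }
                rd.close();
                conn.disconnect();
            }
        }

    Sending the request as HTTP/1.0 in the raw-socket version would also make most servers skip chunked encoding.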

  • [PHP, CSS, & ?] fixed width div, resizing text on the fly based on length

    - by Andrew Heath
    Let's say you've got a simple fixed-width layout that pulls a title from a MySQL database. CSS: #wrapper { width: 800px; } h1 { width: 100%; } HTML: <html> <body> <div id="wrapper"> <h1> $titleString </h1> </div> </body> </html> But the catch is, the length of the title string pulled from your MySQL database varies wildly. Sometimes it might be 10 characters, sometimes it might be 80. It's possible to establish a min & max character count. How, if at all possible, do I get the text-size of my <h1>$titleString</h1> to enlarge/decrease on-the-fly such that the string is only ever on one line and best fit to that line length? I've seen a lot of questions about resizing the div - but in my case the div must always be 100% (800px) and I want to best-fit the title. Obviously a maximum text-size value would have to be set so 5 character strings don't become gargantuan. Does anyone have a suggestion? I'm only using PHP/MySQL/CSS on this page at the moment, but incorporation of another language is fine if it means I can solve the problem. The only thing I can think of is a bruteforce approach whereby through trial and error I establish acceptable string character count ranges matched with CSS em sizes, but that'd be a pretty ugly implementation from the code side.

  • Python to Java translation

    - by obelix1337
    Hello, I have a fairly short algorithm written in Python, but I need to translate it to Java. I didn't find any tool to do that, so I would really appreciate help translating it. I learned just enough Python to get an idea of how the algorithm works. The biggest problem is that in Python everything is an object and some expressions are really confusing to me, like sum(self.flow[(source, vertex)] for vertex, capacity in self.get_edges(source)), and "self.adj" is like a hashmap with multiple values per key, which I have no idea how to put together. Is there a better collection for this code in Java? The code is: [CODE] class FlowNetwork(object): def __init__(self): self.adj, self.flow, = {},{} def add_vertex(self, vertex): self.adj[vertex] = [] def get_edges(self, v): return self.adj[v] def add_edge(self, u,v,w=0): self.adj[u].append((v,w)) self.adj[v].append((u,0)) self.flow[(u,v)] = self.flow[(v,u)] = 0 def find_path(self, source, sink, path): if source == sink: return path for vertex, capacity in self.get_edges(source): residual = capacity - self.flow[(source,vertex)] edge = (source,vertex,residual) if residual > 0 and not edge in path: result = self.find_path(vertex, sink, path + [edge]) if result != None: return result def max_flow(self, source, sink): path = self.find_path(source, sink, []) while path != None: flow = min(r for u,v,r in path) for u,v,_ in path: self.flow[(u,v)] += flow self.flow[(v,u)] -= flow path = self.find_path(source, sink, []) return sum(self.flow[(source, vertex)] for vertex, capacity in self.get_edges(source)) g = FlowNetwork() map(g.add_vertex, ['s','o','p','q','r','t']) g.add_edge('s','o',3) g.add_edge('s','p',3) g.add_edge('o','p',2) g.add_edge('o','q',3) g.add_edge('p','r',2) g.add_edge('r','t',3) g.add_edge('q','r',4) g.add_edge('q','t',2) print g.max_flow('s','t') [/CODE] The result of this example is "5". The algorithm finds the max flow in a graph (linked list or whatever) from source vertex "s" to destination "t". Many thanks for any ideas.
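
    A rough Java translation along the lines asked for (a sketch, untested against the original): Map<String, List<Edge>> stands in for self.adj, and because Java has no tuple keys, the flow table is keyed by the string "u->v" instead of the pair (u, v).

        import java.util.*;

        class FlowNetwork {
            // one outgoing entry per edge: (target vertex, capacity)
            static final class Edge {
                final String to; final int capacity;
                Edge(String to, int capacity) { this.to = to; this.capacity = capacity; }
            }
            // one step of an augmenting path: (from, to, residual), like the Python tuple
            static final class Step {
                final String from, to; final int residual;
                Step(String from, String to, int residual) { this.from = from; this.to = to; this.residual = residual; }
                boolean same(Step o) { return from.equals(o.from) && to.equals(o.to) && residual == o.residual; }
            }

            final Map<String, List<Edge>> adj = new HashMap<String, List<Edge>>(); // self.adj
            final Map<String, Integer> flow = new HashMap<String, Integer>();      // self.flow, keyed "u->v"

            void addVertex(String v) { adj.put(v, new ArrayList<Edge>()); }

            void addEdge(String u, String v, int w) {
                adj.get(u).add(new Edge(v, w));
                adj.get(v).add(new Edge(u, 0));
                flow.put(u + "->" + v, 0);
                flow.put(v + "->" + u, 0);
            }

            List<Step> findPath(String source, String sink, List<Step> path) {
                if (source.equals(sink)) return path;
                for (Edge e : adj.get(source)) {
                    int residual = e.capacity - flow.get(source + "->" + e.to);
                    Step step = new Step(source, e.to, residual);
                    boolean seen = false;
                    for (Step s : path) if (s.same(step)) { seen = true; break; }
                    if (residual > 0 && !seen) {
                        List<Step> extended = new ArrayList<Step>(path);
                        extended.add(step);
                        List<Step> result = findPath(e.to, sink, extended);
                        if (result != null) return result;
                    }
                }
                return null;
            }

            int maxFlow(String source, String sink) {
                List<Step> path = findPath(source, sink, new ArrayList<Step>());
                while (path != null) {
                    int bottleneck = Integer.MAX_VALUE;
                    for (Step s : path) bottleneck = Math.min(bottleneck, s.residual);
                    for (Step s : path) {
                        flow.put(s.from + "->" + s.to, flow.get(s.from + "->" + s.to) + bottleneck);
                        flow.put(s.to + "->" + s.from, flow.get(s.to + "->" + s.from) - bottleneck);
                    }
                    path = findPath(source, sink, new ArrayList<Step>());
                }
                int total = 0;
                for (Edge e : adj.get(source)) total += flow.get(source + "->" + e.to);
                return total;
            }

            public static void main(String[] args) {
                FlowNetwork g = new FlowNetwork();
                for (String v : new String[] {"s", "o", "p", "q", "r", "t"}) g.addVertex(v);
                g.addEdge("s", "o", 3); g.addEdge("s", "p", 3); g.addEdge("o", "p", 2);
                g.addEdge("o", "q", 3); g.addEdge("p", "r", 2); g.addEdge("r", "t", 3);
                g.addEdge("q", "r", 4); g.addEdge("q", "t", 2);
                System.out.println(g.maxFlow("s", "t")); // the Python original prints 5
            }
        }

    Plain HashMap plus a small Edge class keeps the translation close to the original; a multimap library would also work but adds a dependency.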

  • Why is it that an int in C++ that isn't initialized (then used) doesn't cause an error?

    - by omizzle
    I am new to C++ (just starting). I come from a Java background and I was trying out the following piece of code that would sum the numbers between 1 and 10 (inclusive) and then print out the sum: /* * File: main.cpp * Author: omarestrella * * Created on June 7, 2010, 8:02 PM */ #include <cstdlib> #include <iostream> using namespace std; int main() { int sum; for(int x = 1; x <= 10; x++) { sum += x; } cout << "The sum is: " << sum << endl; return 0; } When I ran it, it kept printing 32822 for the sum. I knew the answer was supposed to be 55 and realized that it was printing the max value of a short (32767) plus 55. Changing int sum; to int sum = 0; would work (as it should, since the variable needs to be initialized!). Why does this behavior happen, though? Why doesn't the compiler warn you about something like this? I know Java screams at you when something isn't initialized. Thank you.
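
    For contrast, a compiling Java counterpart (not from the original post): Java's definite-assignment rule is what makes javac reject the uninitialized version at compile time, whereas in C++ reading an uninitialized local is undefined behaviour and the variable simply holds whatever happened to be in that stack slot — g++ and clang only warn when asked to (e.g. -Wall / -Wuninitialized).

        public class SumDemo {
            public static void main(String[] args) {
                // With a bare "int sum;" javac refuses to compile the loop below:
                // "variable sum might not have been initialized".
                int sum = 0;
                for (int x = 1; x <= 10; x++) {
                    sum += x;
                }
                System.out.println("The sum is: " + sum); // 55
            }
        }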

  • Windows 7 taskbar icon grouping with multiple similar windows

    - by user1266104
    When I have a number of similar windows open, for example, multiple Explorer windows, they are all grouped into the same icon on the taskbar. When I hover over this I get a thumbnail of the window and a piece of truncated text which is supposed to help me work out what that window is. However, I also like to have the full path shown in Explorer windows, so the truncated text is usually C:\CommonPathToEveryWind... I have noticed that if I have over 14 Explorer windows open, then Windows gives up trying to display these useless thumbnails and instead gives me a nicely formatted list of paths. My question is how I can customise this behaviour, to either disable thumbnails altogether for a subset of applications where a thumbnail is inappropriate (Explorer, 'Everything'); or to lower the max number of thumbnails per grouped taskbar icon to 2; or just to disable thumbnails altogether (without losing the entire Windows theme). Edit: Just to make it clear what I currently get, and what I actually want. I do still want to keep the grouping behaviour, so that multiple instances of the same program, Explorer for example, only take one slot on the taskbar. What I want is to alter what is displayed when I hover over the grouped icon: What I actually see - useless thumbnails:- http://i.stack.imgur.com/8bTxX.png The style I want for any number of instances:- http://i.stack.imgur.com/zv6Si.png

  • Slow T-SQL query, convert to LINQ to Objects

    - by yimbot
    I have a T-SQL query which populates a DataSet from an MSSQL database. string qry = "SELECT * FROM EvnLog AS E WHERE TimeDate = (SELECT MAX(TimeDate) From EvnLog WHERE Code = E.Code) AND (Event = 8) AND (TimeDate BETWEEN '" + Start + "' AND '" + Finish + "')" The database is quite large and, given the type of nested query it is, the DataAdapter can take a number of minutes to fill the DataSet. I have extended the DataAdapter's timeout value to 480 seconds to combat it, but the client still complains about slow performance and occasional timeouts. To combat this, I was considering executing a simpler query (i.e. just taking the date range) and then populating a generic List which I could then execute a LINQ query against. The simple query executes very quickly, which is great. However, I cannot seem to build a LINQ query which generates the same results as the T-SQL query above. Is this the best solution to this problem? Can anyone provide tips on rewriting the above T-SQL into LINQ? I have also considered using a DataView, but cannot seem to get the results from that either.

  • Spring MessageSource not being used during validation

    - by Jeremy
    I can't get my messages in messages.properties to be used during Spring validation of my form backing objects. app-config.xml: <bean id="messageSource" class="org.springframework.context.support.ResourceBundleMessageSource"> <property name="basename" value="messages" /> </bean> WEB-INF/classes/messages.properties: NotEmpty=This field should not be empty. Form Backing Object: ... @NotEmpty @Size(min=6, max=25) private String password; ... When I loop through all errors in the BindingResult and output the ObjectError's toString I get this: Field error in object 'settingsForm' on field 'password': rejected value []; codes [NotEmpty.settingsForm.password,NotEmpty.password,NotEmpty.java.lang.String,NotEmpty]; arguments [org.springframework.context.support.DefaultMessageSourceResolvable: codes [settingsForm.password,password]; arguments []; default message [password]]; default message [may not be empty] As you can see the default message is "may not be empty" instead of my message "This field should not be empty". I do get my correct message if I inject the messageSource into a controller and output this: messageSource.getMessage("NotEmpty", new Object [] {"password"}, "default empty message", null); So why isn't the validation using my messages.properties? I'm running Spring 3.1.1. Thanks!
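
    One thing worth noting (an assumption about the setup, not stated in the post): getDefaultMessage() comes from the Bean Validation provider and never consults the Spring MessageSource; the "NotEmpty" key is only tried when the error is resolved through the MessageSource, e.g. via <form:errors> or explicitly, as in this sketch. Each ObjectError is a MessageSourceResolvable, so its codes (NotEmpty.settingsForm.password, NotEmpty.password, ..., NotEmpty) are tried in order.

        import java.util.Locale;
        import org.springframework.context.MessageSource;
        import org.springframework.validation.BindingResult;
        import org.springframework.validation.ObjectError;

        public final class BindingErrorPrinter {
            private BindingErrorPrinter() {}

            // Resolve binding errors through the application MessageSource instead of
            // printing getDefaultMessage(); "NotEmpty" from messages.properties wins here.
            public static void print(BindingResult result, MessageSource messageSource, Locale locale) {
                for (ObjectError error : result.getAllErrors()) {
                    System.out.println(messageSource.getMessage(error, locale));
                }
            }
        }

    Alternatively, wiring LocalValidatorFactoryBean with setValidationMessageSource(messageSource) is meant to make the validator itself read constraint messages from messages.properties.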

  • Keeping User-Input UITextView Content Constrained to Its Own Frame

    - by siglesias
    Trying to create a large textbox of fixed size. This problem is very similar to the 140 character constraint problem, but instead of stopping typing at 140 characters, I want to stop typing when the edge of the textView's frame is reached, instead of extending below into the abyss. Here is what I've got for the delegate method. Seems to always be off by a little bit. Any thoughts? - (BOOL)textView:(UITextView *)textView shouldChangeTextInRange:(NSRange)range replacementText:(NSString *)text { BOOL edgeBump = NO; CGSize constraint = textView.frame.size; CGSize size = [[textView.text stringByAppendingString:text] sizeWithFont:textView.font constrainedToSize:constraint lineBreakMode:UILineBreakModeWordWrap]; CGFloat height = size.height; if (height > textView.frame.size.height) { edgeBump = YES; } if([text isEqualToString:@"\b"]){ return YES; } else if(edgeBump){ NSLog(@"EDGEBUMP!"); return NO; } return YES; } EDIT: As per Max's suggestion below, here is the code that works: - (BOOL)textView:(UITextView *)textView shouldChangeTextInRange:(NSRange)range replacementText:(NSString *)text { CGSize constraint = textView.frame.size; NSString *whatWasThereBefore = textView.text; textView.text = [textView.text stringByReplacingCharactersInRange:range withString:text]; if (textView.contentSize.height >= constraint.height) { textView.text = whatWasThereBefore; } return NO; }

  • Conditionally colour data points outside of confidence bands in R

    - by D W
    I need to colour datapoints that are outside of the the confidence bands on the plot below differently from those within the bands. Should I add a separate column to my dataset to record whether the data points are within the confidence bands? Can you provide an example please? Example dataset: ## Dataset from http://www.apsnet.org/education/advancedplantpath/topics/RModules/doc1/04_Linear_regression.html ## Disease severity as a function of temperature # Response variable, disease severity diseasesev<-c(1.9,3.1,3.3,4.8,5.3,6.1,6.4,7.6,9.8,12.4) # Predictor variable, (Centigrade) temperature<-c(2,1,5,5,20,20,23,10,30,25) ## For convenience, the data may be formatted into a dataframe severity <- as.data.frame(cbind(diseasesev,temperature)) ## Fit a linear model for the data and summarize the output from function lm() severity.lm <- lm(diseasesev~temperature,data=severity) jpeg('~/Desktop/test1.jpg') # Take a look at the data plot( diseasesev~temperature, data=severity, xlab="Temperature", ylab="% Disease Severity", pch=16, pty="s", xlim=c(0,30), ylim=c(0,30) ) title(main="Graph of % Disease Severity vs Temperature") par(new=TRUE) # don't start a new plot ## Get datapoints predicted by best fit line and confidence bands ## at every 0.01 interval xRange=data.frame(temperature=seq(min(temperature),max(temperature),0.01)) pred4plot <- predict( lm(diseasesev~temperature), xRange, level=0.95, interval="confidence" ) ## Plot lines derrived from best fit line and confidence band datapoints matplot( xRange, pred4plot, lty=c(1,2,2), #vector of line types and widths type="l", #type of plot for each column of y xlim=c(0,30), ylim=c(0,30), xlab="", ylab="" )

  • Design for fastest page download

    - by mexxican
    I have a file with millions of URLs/IPs and have to write a program to download the pages really fast. The connection rate should be at least 6000/s and the file download rate at least 2000/s, with an avg. 15 KB file size. The network bandwidth is 1 Gbps. My approach so far has been: creating 600 socket threads, each having 60 sockets and using WSAEventSelect to wait for data to read. As soon as a file download is complete, add that memory address (of the downloaded file) to a pipeline (a simple vector) and fire another request. When the total download is more than 50 MB among all socket threads, write all the downloaded files to disk and free the memory. So far this approach has not been very successful: the rate I could hit never went beyond 2900 connections/s, and the downloaded data rate was even less. Can somebody suggest an alternative approach which could give me better stats? Also, I am working on a Windows Server 2008 machine with 8 GB of memory. And do we need to hack the kernel so that we can use more threads and memory? Currently I can create a max of 1500 threads, and memory usage does not go beyond 2 GB [which technically should be much more, as this is a 64-bit machine]. And IOCP is out of the question, as I have no experience with it so far and have to fix this application today. Thanks, guys!

  • 32-bit JVM on 64-bit Windows crashes on launch with -Xmx1300m and plenty of free memory

    - by Konrad Garus
    I'm struggling with Java heap space settings. The default Java on Windows is the 32-bit client regardless of OS version (that's what Oracle recommends to all users). It appears to set max heap size to 256 MB by default, and that is too little for me. I use a custom launcher to start the application. I would like it to use more memory on computers with plenty RAM, and default to -Xmx512m on those with less RAM. As far as I'm aware, the only way is the static -Xmx setting (that has to be set on launch). I have a user who has 8 GB RAM, 64-bit Windows and 32-bit Java 7. Maximum memory visible to the JVM is 4G (as returned by querying OperatingSystemMXBean). I understand why, no issue. For some reason my application is unable to start for this user with -Xmx1300m, even though he has 2.3G free memory. He closed some applications (having 5G free memory), and still it would not launch. The error reported to me was: error occured during init of vm could not reserve enough space for object heap What's going on? Could it be that the 32-bit JVM is only able to address the "first" 4G of memory and has to have a 1300M block available within those first 4 gigabytes? How can I solve this problem, except for asking everyone to install 64-bit Java (what is unlikely to be acceptable)?
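
    A plausible explanation (hedged, since it depends on the exact Windows setup): a 32-bit JVM must reserve the whole Java heap as one contiguous block of virtual address space, and on Windows that space is fragmented by loaded DLLs, so the practical -Xmx ceiling is usually somewhere around 1.2-1.5 GB no matter how much physical RAM is free. Below is a launcher sketch that sizes the heap from the bitness of the JVM it runs on (the property names are Sun/Oracle-specific, and app.jar is a placeholder, not from the original post):

        public class LauncherSketch {
            public static void main(String[] args) throws Exception {
                // "sun.arch.data.model" reports "32" or "64" on Sun/Oracle JVMs;
                // fall back to os.arch on other vendors' JVMs.
                String model = System.getProperty("sun.arch.data.model",
                        System.getProperty("os.arch", "").contains("64") ? "64" : "32");
                // Stay well below the contiguous-address-space ceiling on 32-bit VMs.
                String xmx = "64".equals(model) ? "-Xmx1300m" : "-Xmx768m";
                Process p = new ProcessBuilder("java", xmx, "-jar", "app.jar")
                        .inheritIO()
                        .start();
                System.exit(p.waitFor());
            }
        }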

  • expand floating object when floating object within expands

    - by Scarface
    Hey guys, quick question, I have a link when clicked drops down a list. This list is floated to the right to position it properly. This list is in another box that has been floated. My problem is that when the list expands, the box does not and the list comes out of the container box, unless the list is not floated. However floating it seems like the only way to get it to the position I want. If anyone has any ideas on how to solve this problem I would appreciate it. .container-box { margin-top:0px; float:left; padding-left:5px; position:relative; } #box-within { float:right; font-weight:bold; max-height:250px; display: none; background-color:#fff; overflow: auto; width:325px; padding:5px; position:relative; }

  • CSS/JQUERY make div scrollable without showing scrollbar

    - by Ispuk
    Is there any way to make a div scrollable with overflow-y:hidden; and overflow-x:hidden? I'm trying without success; maybe I need some JS or jQuery script? I mean, I would like to make the div scroll on the y axis without showing a scrollbar on the right side (as it does now). I tried: .get-list{ position:absolute; z-index:444; text-align: center; display: none; bottom:0; clear:both !important; left:0; right:0; top:11%; margin:0 auto; background:#fff; max-height:800px; overflow-y:no-display; overflow-x:hidden; display: block; } Thanks. EDIT: log-widget-list{ position:absolute; z-index:444; text-align: center; display: none; width:99%; margin:0 auto; background:#fff; height:800px; overflow: hidden; } .log-widget-list .scroller{ overflow: scroll; height:800px; width:100%; } It shows the right scrollbar anyway.

  • Upload file using jquery dialog not working

    - by kandroid
    I want to upload a file using a jQuery dialog. I have created a partial view and I show that partial view in the dialog. The problem is, when I browse to the partial view directly and upload a file, it works perfectly. BUT when I put the partial view inside a jQuery dialog, it submits the form but doesn't submit the file, so I get a null value. I really don't understand what the difference is here!! I hope someone can help me identify the problem. Here is some code; jQuery code: $('#UploadDialog').dialog({ autoOpen: false, width: 580, resizable: false, modal: true, open: function (event, ui) { $(this).load('@Url.Action("Upload","Notes")'); }, buttons: { "Upload": function () { $("#upload-message").html(''); $("#noteUploadForm").submit(); }, "Cancel": function () { $(this).dialog("close"); } } }); $(".uploadLink").click(function () { $('#UploadDialog').dialog('open'); }); return false; }); partial view: @using (Ajax.BeginForm("Upload", "Notes", null, new AjaxOptions { UpdateTargetId = "upload-message", InsertionMode = InsertionMode.Replace, HttpMethod = "POST", OnSuccess = "uploadSuccess" }, new { id = "noteUploadForm" , enctype="multipart/form-data"})) { <div> <div id="upload-message"></div> <div class="editLabel"> @Html.LabelFor(m => m.Notes.NoteTitle) </div> <div class="editText"> @Html.TextBoxFor(m => m.Notes.NoteTitle) </div> <div class="clear"></div> <div class="editLabel"> @Html.Label("Upload file") </div> <div class="editText"> <input type="file" name="file" />(100MB max size) </div> </div> }

  • Many users, many cpus, no delays. Good for cloud?

    - by Eric
    I wish to set up a CPU-intensive, time-critical query service for users on the internet. A usage scenario is described below. Is cloud computing the right way to go for such an implementation? If so, what cloud vendor(s) cater to this type of application? I ask specifically in terms of: 1) pricing 2) latency resulting from: - slow CPUs, instance creations, JIT compiles, etc. - internal management and communication of processes inside the cloud (e.g. a queuing process and a calculation process) - communication between cloud and end user 3) ease of deployment A usage scenario I am expecting is: - A typical user sends a query (XML of size around 1K) once every 30 seconds on average. - Each query requires a numerical computation of average time 0.2 sec and max time 1 sec on a 1 GHz Pentium. The computation requires no data other than the query itself and is performed by the same piece of code each time. - The delay a user experiences between sending a query and receiving a response should be on average no more than 2 seconds and in general no more than 5 seconds. - A background save of the response to a DB should occur (not time critical). - There can be up to 30000 simultaneous users - i.e., on average 1000 queries a second, each requiring an average 0.2 sec calculation, so that would necessitate around 200 CPUs. Currently I'm looking at GAE Java (for quicker deployment and less IT hassle) and EC2 (speed and price optimization) as options. Where can I learn more about the right way to set up such a system? Past threads, different blogs, books, etc. BTW, if my terminology is wrong or confusing, please let me know. I'd greatly appreciate any help.
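
    Just to make the arithmetic in the scenario explicit (numbers taken from the question itself), a throwaway sketch:

        public class CapacityEstimate {
            public static void main(String[] args) {
                int users = 30000;                          // simultaneous users
                double queriesPerUserPerSec = 1.0 / 30.0;   // one query every 30 s
                double cpuSecPerQuery = 0.2;                // average compute time on a 1 GHz core
                double qps = users * queriesPerUserPerSec;  // ~1000 queries/s
                double busyCores = qps * cpuSecPerQuery;    // ~200 cores busy on average
                System.out.printf("load: %.0f queries/s, ~%.0f cores at full utilisation%n", qps, busyCores);
            }
        }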

  • How can I allocate 8 GB (not 1 GB) of RAM to my JDK on Windows

    - by Deepak
    The JDK on Windows takes a max of around 2 GB of RAM. Even if we allocate more RAM to our JDK, it doesn't take it. If I need to run a process which needs 8 GB of RAM on Windows, how can I achieve it? Is there a JDK from any other provider which could support it? Memcached provides us additional cache which can be used... but that is not what I am looking for. Suppose I need to run my JMeter with 8 GB of RAM on my Windows box; Memcached won't help for sure. Is there any provider which provides me with this? Previously I thought Terracotta did that, but it looks like that is also like Memcached. I am using Windows 7. If needed I can use Windows Server as well. I just need to get it running.
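
    For what it's worth, a standard Oracle/Sun 64-bit JDK on 64-bit Windows accepts large heaps directly (e.g. java -Xmx8g); the ~2 GB ceiling is a 32-bit-process limit, not a vendor issue. A quick sketch (not from the original post) to verify what the running JVM actually got — sun.arch.data.model is Sun/Oracle-specific:

        public class HeapCheck {
            public static void main(String[] args) {
                // Run as:  java -Xmx8g HeapCheck   (needs a 64-bit JVM)
                long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
                System.out.println("max heap available to this JVM: " + maxMb + " MB");
                System.out.println("data model: " + System.getProperty("sun.arch.data.model", "unknown") + "-bit");
            }
        }

    For JMeter specifically, raising the -Xmx value in jmeter.bat should have the same effect, assuming a 64-bit JVM is the one on the PATH.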

  • drawing graphs in php

    - by user434885
    i am using php to generate graphs from arrays. I wish to create multiple graphs on the same page as i need to design a summary report from answers extracted from a database. Currently i am using this code and am only able to get one single graph. what additions to the code do i need to make to get multiple graphs? <?php function draw_graph($values) { // Get the total number of columns we are going to plot $columns = count($values); // Get the height and width of the final image $width = 300; $height = 200; // Set the amount of space between each column $padding = 5; // Get the width of 1 column $column_width = $width / $columns ; // Generate the image variables $im = imagecreate($width,$height); $gray = imagecolorallocate ($im,0xcc,0xcc,0xcc); $gray_lite = imagecolorallocate ($im,0xee,0xee,0xee); $gray_dark = imagecolorallocate ($im,0x7f,0x7f,0x7f); $white = imagecolorallocate ($im,0xff,0xff,0xff); // Fill in the background of the image imagefilledrectangle($im,0,0,$width,$height,$white); $maxv = 0; // Calculate the maximum value we are going to plot for($i=0;$i<$columns;$i++)$maxv = max($values[$i],$maxv); // Now plot each column for($i=0;$i<$columns;$i++) { $column_height = ($height / 100) * (( $values[$i] / $maxv) *100); $x1 = $i*$column_width; $y1 = $height-$column_height; $x2 = (($i+1)*$column_width)-$padding; $y2 = $height; imagefilledrectangle($im,$x1,$y1,$x2,$y2,$gray_dark); } header ("Content-type: image/png"); imagepng($im); imagedestroy($im); } $values = array("23","32","35","57","12"); $values2 = array("123","232","335","157","102"); draw_graph($values2); draw_graph($values);//no output is coming draw_graph($values2);//no output is coming draw_graph($values);//no output is coming ?>

  • Page URL and database organization.

    - by shurik2533
    I want a page's name to be its URL. For example, if a page has the heading "Some Page", then its address should be http://somesite/some_page/. The "some_page" name is generated by the system automatically, and "some_page" is the unique identifier of the page. The problem is that a user may later enter a name which already exists, which would cause an error. I need to find an optimal solution to this problem for large volumes of data. I have solved the problem as follows: the page identifier in the database is the page name plus a suffix, which is zero by default. When a page is added there is a check for existence. If no such page exists, the suffix is 0 and its name is "some_page"; if the page does exist, I search for the maximum suffix, set suffix=suffix+1, and the page name becomes "some_page_1". For this I create in the database a compound key from the fields "suffix" and "pageName": Table Pages suffix|pageName |pageTitle 0 |some_page |Some Page 1 |some_page |Some Page 0 |other_page|Other Page Pages are added through a stored procedure: CREATE PROCEDURE addPage (pageNameVal VARCHAR(100), pageTitleVal VARCHAR(100)) BEGIN DECLARE v INT DEFAULT 0; SELECT MAX(suffix) FROM pages WHERE pageName=pageNameVal INTO v; IF v >= 0 THEN SET v = v + 1; ELSE SET v = 0; END IF; INSERT INTO pages (pageName, suffix, pageTitle) VALUES (pageNameVal, v, pageTitleVal); END; Are there better solutions?

  • Can I split a single SQL 2008 DB Table into multiple filegroups, based on a discriminator column?

    - by Pure.Krome
    Hi folks, I've got a SQL Server 2008 R2 database which has a number of tables. Two of these tables contain a lot of large data, mainly because one of them is VARBINARY(MAX) and the sister table is GEOGRAPHY. (Why two tables? Read below if you're interested.***) The data in these tables is geospatial shapes, such as zipcode boundaries. Now, the first 70K-odd rows are for DataType = 1 and the remaining 5 million rows are for DataType = 2. Is it possible to split the table data into two files, so that all rows with DataType != 2 go into File_A and DataType = 2 goes into File_B? This way, when I back up the DB, I can skip adding File_B so my download is way smaller. Is this possible? I'm guessing you might be thinking - why not keep them as TWO extra tables? Mainly because in the code the data is conceptually the same; it just happens that I want to split the storage of this model data. It really messes up my model if I now have two aggregates in my model instead of one. ***Entity Framework doesn't like tables with GEOGRAPHY, so I have to create a new table which transforms the GEOGRAPHY to VARBINARY, and then drop that into EF.

  • Progressbar behaves strangely

    - by wanderameise
    I just created an application in C# that uses a thread which polls the UART for a receive event. If data is received, an event is triggered in my main (GUI) thread and a progress bar is advanced via the PerformStep() method (of course, I previously set the Max value accordingly). PerformStep is invoked using the following expression to handle cross-threading: this.Invoke((Action)delegate{progressBar2.PerformStep();}) When running this application the progress bar never hits its final value; it stops at 80%. When debugging and stopping at the line mentioned above, everything works fine when single-stepping. I have no idea what is going on! Start read thread on main thread: pThreadWrite = new Thread(new ThreadStart(ReadThread)); pThreadWrite.Start(); Read thread: private void ReadThread() { while(1) { if (ReceiveEvent) { FlashProgressBar(); } } } Event that is triggered in the main thread: private void FlashProgressBar() { this.Invoke((Action)delegate { progressBar2.PerformStep();}); } (It's a simplified representation of my code.) It seems as if the internal progress is faster than the visual one.

  • R optimization: How can I avoid a for loop in this situation?

    - by chrisamiller
    I'm trying to do a simple genomic track intersection in R, and running into major performance problems, probably related to my use of for loops. In this situation, I have pre-defined windows at intervals of 100bp and I'm trying to calculate how much of each window is covered by the annotations in mylist. Graphically, it looks something like this: 0 100 200 300 400 500 600 windows: |-----|-----|-----|-----|-----|-----| mylist: |-| |-----------| So I wrote some code to do just that, but it's fairly slow and has become a bottleneck in my code: ##window for each 100-bp segment windows <- numeric(6) ##second track mylist = vector("list") mylist[[1]] = c(1,20) mylist[[2]] = c(120,320) ##do the intersection for(i in 1:length(mylist)){ st <- floor(mylist[[i]][1]/100)+1 sp <- floor(mylist[[i]][2]/100)+1 for(j in st:sp){ b <- max((j-1)*100, mylist[[i]][1]) e <- min(j*100, mylist[[i]][2]) windows[j] <- windows[j] + e - b + 1 } } print(windows) [1] 20 81 101 21 0 0 Naturally, this is being used on data sets that are much larger than the example I provide here. Through some profiling, I can see that the bottleneck is in the for loops, but my clumsy attempt to vectorize it using *apply functions resulted in code that runs an order of magnitude more slowly. I suppose I could write something in C, but I'd like to avoid that if possible. Can anyone suggest another approach that will speed this calculation up?

  • Connect 4 C# (How to draw the grid)

    - by Matt Wilde
    I've worked out most of the code and have several game classes. The one bit I'm stuck on at the moment, it how to draw the actual Connect 4 grid. Can anyone tell me what's wrong with this for loop? I get no errors but the grid doesn't appear. I'm using C#. private void Drawgrid() { Brush b = Brushes.Black; Pen p = Pens.Black; for (int xCoor = XStart, col = 0; xCoor < XStart + ColMax * DiscSpace; xCoor += DiscSpace, col++) // x coordinate beginning; while the x coordinate is smaller than the max column size, times it by // the space between each disc and then add the x coord to the disc space in order to create a new circle. for (int yCoor = YStart, row = RowMax - 1; yCoor < YStart + RowMax * DiscScale; yCoor += DiscScale, row--) { switch (Grid.State[row, col]) { case GameGrid.Gridvalues.Red: b = Brushes.Red; break; case GameGrid.Gridvalues.Yellow: b = Brushes.Yellow; break; case GameGrid.Gridvalues.None: b = Brushes.Aqua; break; } MainDisplay.DrawEllipse(p, xCoor, yCoor, 50, 50); MainDisplay.FillEllipse(b, xCoor, yCoor, 50, 50); } Invalidate(); } Thanks.

  • Generating all possible subsets of a given QuerySet in Django

    - by Glen
    This is just an example, but given the following model: class Foo(models.model): bar = models.IntegerField() def __str__(self): return str(self.bar) def __unicode__(self): return str(self.bar) And the following QuerySet object: foobar = Foo.objects.filter(bar__lt=20).distinct() (meaning, a set of unique Foo models with bar <= 20), how can I generate all possible subsets of foobar? Ideally, I'd like to further limit the subsets so that, for each subset x of foobar, the sum of all f.bar in x (where f is a model of type Foo) is between some maximum and minimum value. So, for example, given the following instance of foobar: >> print foobar [<Foo: 5>, <Foo: 10>, <Foo: 15>] And min=5, max=25, I'd like to build an object (preferably a QuerySet, but possibly a list) that looks like this: [[<Foo: 5>], [<Foo: 10>], [<Foo: 15>], [<Foo: 5>, <Foo: 10>], [<Foo: 5>, <Foo: 15>], [<Foo: 10>, <Foo: 15>]] I've experimented with itertools but it doesn't seem particularly well-suited to my needs. I think this could be accomplished with a complex QuerySet but I'm not sure how to start.

  • Slow query. Wrong database structure?

    - by Tin
    I have a database with table that contains tasks. Tasks have a lifecycle. The status of the task's lifecycle can change. These state transitions are stored in a separate table tasktransitions. Now I wrote a query to find all open/reopened tasks and recently changed tasks but I already see with a rather small number of tasks (<1000) that execution time has becoming very long (0.5s). Tasks +-------------+---------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+---------+------+-----+---------+----------------+ | taskid | int(11) | NO | PRI | NULL | auto_increment | | description | text | NO | | NULL | | +-------------+---------+------+-----+---------+----------------+ Tasktransitions +------------------+-----------+------+-----+-------------------+----------------+ | Field | Type | Null | Key | Default | Extra | +------------------+-----------+------+-----+-------------------+----------------+ | tasktransitionid | int(11) | NO | PRI | NULL | auto_increment | | taskid | int(11) | NO | MUL | NULL | | | status | int(11) | NO | MUL | NULL | | | description | text | NO | | NULL | | | userid | int(11) | NO | | NULL | | | transitiondate | timestamp | NO | | CURRENT_TIMESTAMP | | +------------------+-----------+------+-----+-------------------+----------------+ Query SELECT tasks.taskid,tasks.description,tasklaststatus.status FROM tasks LEFT OUTER JOIN ( SELECT tasktransitions.taskid,tasktransitions.transitiondate,tasktransitions.status FROM tasktransitions INNER JOIN ( SELECT taskid,MAX(transitiondate) AS lasttransitiondate FROM tasktransitions GROUP BY taskid ) AS tasklasttransition ON tasklasttransition.lasttransitiondate=tasktransitions.transitiondate AND tasklasttransition.taskid=tasktransitions.taskid ) AS tasklaststatus ON tasklaststatus.taskid=tasks.taskid WHERE tasklaststatus.status IS NULL OR tasklaststatus.status=0 or tasklaststatus.transitiondate>'2013-09-01'; I'm wondering if the database structure is best choice performance wise. Could adding indexes help? I already tried to add some but I don't see great improvements. +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment | +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ | tasktransitions | 0 | PRIMARY | 1 | tasktransitionid | A | 896 | NULL | NULL | | BTREE | | | | tasktransitions | 1 | taskid_date_ix | 1 | taskid | A | 896 | NULL | NULL | | BTREE | | | | tasktransitions | 1 | taskid_date_ix | 2 | transitiondate | A | 896 | NULL | NULL | | BTREE | | | | tasktransitions | 1 | status_ix | 1 | status | A | 3 | NULL | NULL | | BTREE | | | +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ Any other suggestions?

  • All connections in pool are in use

    - by veljkoz
    We currently have a little situation on our hands - it seems that someone, somewhere forgot to close the connection in code. Result is that the pool of connections is relatively quickly exhausted. As a temporary patch we added Max Pool Size = 500; to our connection string on web service, and recycle pool when all connections are spent, until we figure this out. So far we have done this: SELECT SPId FROM MASTER..SysProcesses WHERE DBId = DB_ID('MyDb') and last_batch < DATEADD(MINUTE, -15, GETDATE()) to get SPID's that aren't used for 15 minutes. We're now trying to get the query that was executed last using that SPID with: DBCC INPUTBUFFER(61) but the queries displayed are various, meaning either something on base level regarding connection manipulation was broken, or our deduction is erroneous... Is there an error in our thinking here? Does the DBCC / sysprocesses give results we're expecting or is there some side-effect catch? (for example, connections in pool influence?) (please, stick to what we could find out using SQL since the guys that did the code are many and not all present right now)
