Search Results



  • Lucene.net create+lock errors in ASP.NET

    - by acidzombie24
    I have an issue with Lucene.net: it throws a lock exception. After poking around I noticed these things. My code below works in a plain app, but when it is called in Application_Start I get a NoSuchDirectoryException. If I don't close the writer (as my code below doesn't), I WILL get a LockObtainFailedException with the message "Lock obtain timed out: SimpleFSLock@<FULL_PATH>", from either the app or ASP.NET. These threads hinted that spawned threads get fewer permissions than the main thread (but! my main thread has problems as well...) and that one solution is to impersonate IIS. I am using Visual Studio 2010; I am not sure how full blown it is, but my attempt to impersonate failed. So my question is: how do I have Lucene create the directory, and not throw an exception if the writer wasn't closed for some reason (such as the power going out)?

    http://stackoverflow.com/questions/2341163/why-is-my-lucene-index-getting-locked/2499285#2499285
    http://stackoverflow.com/questions/1123517/lucene-net-and-i-o-threading-issue/1123981#1123981

        static IndexWriter writer = null;

        static void lucene_init()
        {
            bool create = false;
            string dirname = "LuceneIndex_z";
            if (System.IO.Directory.Exists(dirname) == false)
                create = true;
            var directory = FSDirectory.GetDirectory(dirname);
            var analyzer = new StandardAnalyzer();
            writer = new IndexWriter(directory, analyzer, create);
        }
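    One observation, not from the original post: "LuceneIndex_z" is a relative path, so it resolves against the worker process's current directory rather than the web root, which would explain why Application_Start can't find it. A sketch of one way to address both symptoms, assuming Lucene.Net 2.x (hedged -- check the exact API of your version):

        // Sketch only: resolve the index folder against the AppDomain base
        // directory (the web root under ASP.NET, the exe folder in a plain app),
        // create it if missing, and clear a stale write lock left by a crash.
        static IndexWriter writer = null;

        static void lucene_init()
        {
            string dirname = System.IO.Path.Combine(
                AppDomain.CurrentDomain.BaseDirectory, "LuceneIndex_z");
            bool create = !System.IO.Directory.Exists(dirname);
            if (create)
                System.IO.Directory.CreateDirectory(dirname);

            var directory = FSDirectory.GetDirectory(dirname);
            if (IndexWriter.IsLocked(directory))
                IndexWriter.Unlock(directory);   // stale lock from an unclean shutdown

            writer = new IndexWriter(directory, new StandardAnalyzer(), create);
        }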


  • Bit reversal of an integer, ignoring integer size and endianness

    - by ??O?????
    Given an integer typedef:

        typedef unsigned int TYPE;
        /* or */
        typedef unsigned long TYPE;

    I have the following code to reverse the bits of an integer:

        TYPE max_bit = (TYPE)-1;

        void reverse_int_setup()
        {
            TYPE bits = (TYPE)max_bit;
            while (bits <<= 1)
                max_bit = bits;
        }

        TYPE reverse_int(TYPE arg)
        {
            TYPE bit_setter = 1, bit_tester = max_bit, result = 0;
            for (result = 0; bit_tester; bit_tester >>= 1, bit_setter <<= 1)
                if (arg & bit_tester)
                    result |= bit_setter;
            return result;
        }

    One just needs to run reverse_int_setup() first, which stores an integer with only the highest bit turned on; then any call to reverse_int(arg) returns arg with its bits reversed (to be used as a key to a binary tree, taken from an increasing counter, but that's more or less irrelevant). Is there a platform-agnostic way to have, at compile time, the correct value that max_bit holds after the call to reverse_int_setup()? Otherwise, is there an algorithm you consider better/leaner than the one I have for reverse_int()? Thanks.
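    For the second question, a widely used leaner alternative when a fixed 32-bit width is acceptable (sketched here in C# with uint; the same masks work verbatim in C): swap adjacent bits, then pairs, nibbles, bytes, and halves -- five steps instead of one loop iteration per bit.

        static uint Reverse32(uint v)
        {
            v = ((v >> 1) & 0x55555555u) | ((v & 0x55555555u) << 1);   // swap adjacent bits
            v = ((v >> 2) & 0x33333333u) | ((v & 0x33333333u) << 2);   // swap bit pairs
            v = ((v >> 4) & 0x0F0F0F0Fu) | ((v & 0x0F0F0F0Fu) << 4);   // swap nibbles
            v = ((v >> 8) & 0x00FF00FFu) | ((v & 0x00FF00FFu) << 8);   // swap bytes
            return (v >> 16) | (v << 16);                              // swap 16-bit halves
        }

    It does give up the platform-agnostic TYPE, which may or may not be an acceptable trade.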


  • Spikes in Socket Performance

    - by Harun Prasad
    We are facing random spikes in a high-throughput transaction processing system that uses sockets for IPC. Below is the setup used for the run. The client opens and closes a new connection for every transaction, and there are 4 exchanges between the server and the client. We have disabled TIME_WAIT by setting the socket linger (SO_LINGER) option via setsockopt, as we thought the spikes were caused by sockets waiting in TIME_WAIT. There is no processing done for the transaction; only messages are passed. OS used: CentOS 5.4. The average round-trip time is around 3 milliseconds, but sometimes the round-trip time ranges from 100 milliseconds to a couple of seconds.

    Steps used for execution and measurement, and output. Starting the server:

        $ python sockServerLinger.py /dev/null &

    Starting the client to post 1 million transactions to the server; it logs the time for each transaction in the client.log file:

        $ python sockClient.py 1000000 client.log

    Once the execution finishes, the following command will show the execution times greater than 100 milliseconds in the format <line_number>:<execution_time>:

        $ grep -n "0.[1-9]" client.log | less

    Below is the example code for the server and client.

    Server:

        # File: sockServerLinger.py
        import socket, traceback, time
        import struct

        host = ''
        port = 9999
        l_onoff = 1
        l_linger = 0
        lingeropt = struct.pack('ii', l_onoff, l_linger)

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, lingeropt)
        s.bind((host, port))
        s.listen(1)

        while 1:
            try:
                clientsock, clientaddr = s.accept()
                print "Got connection from", clientsock.getpeername()
                data = clientsock.recv(1024*1024*10)
                #print "asdasd", data
                numsent = clientsock.send(data)
                data1 = clientsock.recv(1024*1024*10)
                numsent = clientsock.send(data)
                ret = 1
                while (ret > 0):
                    data1 = clientsock.recv(1024*1024*10)
                    ret = len(data)  # note: as written this tests the first recv's data, not data1
                clientsock.close()
            except KeyboardInterrupt:
                raise
            except:
                print traceback.print_exc()
                continue

    Client:

        # File: sockClient.py
        import socket, traceback, sys
        import time

        i = 0
        while 1:
            try:
                st = time.time()
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                while (s.connect_ex(('127.0.0.1', 9999)) != 0):
                    continue
                numsent = s.send("asd"*1000)
                response = s.recv(6000)
                numsent = s.send("asd"*1000)
                response = s.recv(6000)
                i += 1
                if i == int(sys.argv[1]):
                    break
            except KeyboardInterrupt:
                raise
            except:
                print "in exec:::::::::::::", traceback.print_exc()
                continue
            print time.time() - st
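    Not from the original post, but one way to narrow such spikes down is to time the connect phase separately from the exchange; if the jumps sit on connect, they are likely SYN retransmits against the short listen backlog rather than TIME_WAIT. A hedged sketch of such a probe, written here in C#:

        // Sketch: times TCP connect separately from the request/response exchange,
        // so connect-phase spikes can be told apart from server-side delays.
        using System;
        using System.Diagnostics;
        using System.Net.Sockets;
        using System.Text;

        class Probe
        {
            static void Main()
            {
                byte[] payload = Encoding.ASCII.GetBytes(new string('a', 3000));
                byte[] buffer = new byte[6000];
                for (int i = 0; i < 1000; i++)
                {
                    var sw = Stopwatch.StartNew();
                    using (var client = new TcpClient())
                    {
                        client.Connect("127.0.0.1", 9999);
                        long connectMs = sw.ElapsedMilliseconds;
                        var stream = client.GetStream();
                        stream.Write(payload, 0, payload.Length);
                        stream.Read(buffer, 0, buffer.Length);
                        stream.Write(payload, 0, payload.Length);
                        stream.Read(buffer, 0, buffer.Length);
                        Console.WriteLine("connect={0}ms total={1}ms",
                            connectMs, sw.ElapsedMilliseconds);
                    }
                }
            }
        }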


  • Looking for ideas for a simple pattern matching algorithm to run on a microcontroller

    - by pic_audio
    I'm working on a project to recognize simple audio patterns. I have two data sets, each made up of between 4 and 32 note/duration pairs. One set is predefined; the other is from an incoming data stream. The lengths of the two strongly correlated data sets are often different, but roughly the same "shape". My goal is to come up with some sort of ranking as to how well the two data sets correlate/match. I have converted the incoming frequencies to pitch, and shifted the incoming data stream's pitch so that its average pitch matches that of the predefined data set. I also stretch/compress the incoming data set's durations to match the overall duration of the predefined set. Here are two graphical examples of data that should be ranked as strongly correlated: http://s2.postimage.org/FVeG0-ee3c23ecc094a55b15e538c3a0d83dd5.gif (Sorry, as a new user I couldn't directly post images.) I'm doing this on an 8-bit microcontroller, so resources are minimal. Speed is less of an issue; a second or two of processing isn't a deal breaker. It wouldn't surprise me if there is an obvious solution, I've just been staring at the problem too long. Any ideas? Thanks in advance...
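    One standard technique for scoring two sequences of different lengths but similar shape is dynamic time warping. A minimal sketch (in C# for readability, assuming using System; integer-only and O(shorter sequence) memory, so the idea ports to a small micro), applied here to the normalized pitch values -- how to fold in durations is left open:

        // Returns the DTW distance between two integer sequences;
        // lower means more strongly correlated.
        static int DtwDistance(int[] a, int[] b)
        {
            const int Inf = int.MaxValue / 2;
            int[] prev = new int[b.Length + 1];
            int[] curr = new int[b.Length + 1];
            for (int j = 1; j <= b.Length; j++) prev[j] = Inf;
            prev[0] = 0;
            for (int i = 1; i <= a.Length; i++)
            {
                curr[0] = Inf;
                for (int j = 1; j <= b.Length; j++)
                {
                    int cost = Math.Abs(a[i - 1] - b[j - 1]);
                    curr[j] = cost + Math.Min(prev[j],           // skip a note in a
                                     Math.Min(curr[j - 1],       // skip a note in b
                                              prev[j - 1]));     // align the two notes
                }
                int[] tmp = prev; prev = curr; curr = tmp;
            }
            return prev[b.Length];
        }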


  • Finding if a Binary Tree is a Binary Search Tree

    - by dharam
    Today I had an interview where I was asked to write a program which takes a binary tree and returns true if it is also a binary search tree, and false otherwise.

    My approach 1: Perform an inorder traversal and store the elements in O(n) time. Now scan through the array/list of elements and check whether the element at the ith index is greater than the element at the (i+1)th index. If such a condition is encountered, return false and break out of the loop. (This takes O(n) time.) At the end, return true. But this gentleman wanted me to provide an efficient solution. I tried, but I was unsuccessful, because to find whether it is a BST I have to check each node. Moreover, he was pointing me to think about recursion.

    My approach 2: A BT is a BST if, for any node N, N->left < N and N->right > N, the inorder successor of the left node of N is less than N, the inorder successor of the right node of N is greater than N, and the left and right subtrees are BSTs. But this is going to be complicated, and the running time doesn't seem good. Please help if you know any optimal solution. Thanks.
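    The answer the interviewer was most likely fishing for is the single-pass recursion that threads an allowed (min, max) interval down the tree; it is O(n) time like the traversal, but needs no extra array and can bail out early. A sketch in C# (long bounds sidestep the int.MinValue/MaxValue edge cases):

        class Node { public int Value; public Node Left, Right; }

        static bool IsBst(Node root)
        {
            return IsBst(root, long.MinValue, long.MaxValue);
        }

        static bool IsBst(Node n, long min, long max)
        {
            if (n == null) return true;                        // empty subtree is a BST
            if (n.Value <= min || n.Value >= max) return false;
            return IsBst(n.Left, min, n.Value)                 // left must fall in (min, value)
                && IsBst(n.Right, n.Value, max);               // right must fall in (value, max)
        }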


  • How to weed out the bad programmers from the competent ones in the interview process

    - by thaBadDawg
    I am getting ready to add another developer to my team, and I want to try to fix the mistakes I made in my last hiring cycle. I like to think of myself as a competent programmer (I can be given a project, I can deliver on that project, and the deliverables work with very few, if any, bugs), and so I ask questions that I would ask myself in an interview. I've come to the conclusion that my interviewing skills are completely lacking, because the last two people I've hired interviewed incredibly well but have been less than ideal at the tasks they've been given. My CTO (who was completely useless in giving any guidance as to how) suggested I improve my interviewing skills. The question is this: how does one programmer interview another programmer and get an understanding of the other programmer's abilities?

    Edit: Though slightly different, the answers provided to this question could be of use to you. That question concerns specific interview questions, while yours seems to be more general, about interview approaches and not just the questions themselves.

    Update: Just for the hell of it I asked two of the guys I work with if they could do FizzBuzz. 45 minutes and 80 minutes to work it out. And these aren't bottom-level guys either.
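    For reference (not part of the original question): FizzBuzz asks for the numbers 1 to 100, with multiples of 3 replaced by "Fizz", multiples of 5 by "Buzz", and multiples of both by "FizzBuzz" -- the point being that a competent programmer should produce something like the following C# in minutes, not in 45:

        for (int i = 1; i <= 100; i++)
        {
            if (i % 15 == 0)     Console.WriteLine("FizzBuzz");  // multiple of both 3 and 5
            else if (i % 3 == 0) Console.WriteLine("Fizz");
            else if (i % 5 == 0) Console.WriteLine("Buzz");
            else                 Console.WriteLine(i);
        }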


  • Android: Dialog themed activity not visible

    - by Vincent
    I have an activity which, when started, needs to check if the user is authenticated; if not, I need to display an interface to authenticate. I do this with another activity, which has a dialog theme, and I start it in onResume() with the flags NO_HISTORY and EXCLUDE_FROM_RECENTS. Everything works fine when starting the application for the first time. But I have a feature that resets the login after some time, if the user is not in an activity. When I test this, I start the application, enter the password, then move back to home. Then, when I enter the application again, the background darkens as if the dialog were about to show, but it doesn't. At this point, if I press the back button, the darkening from the background activity disappears for a second, then the dialog finally appears. I used logcat to investigate the case, and the activity lifecycle functions get called properly:

        //For the first start:
        onCreate background activity
        onStart background activity
        onResume background activity
        onPause background activity
        onCreate dialog
        onStart dialog
        onResume dialog

        //Enter password
        onPause dialog
        onResume background activity
        onStop dialog
        onDestroy dialog

        //navigating to homescreen
        onPause background activity
        onStop background activity

        //starting again
        onRestart background activity
        onStart background activity
        onResume background activity
        onPause background activity
        onCreate dialog
        onStart dialog
        onResume dialog

        //no dialog shown, only darkened background activity receiving no input

        //pressing back button
        onPause dialog
        onResume background activity
        onPause background activity
        onCreate NEW dialog
        onStart NEW dialog
        onResume NEW dialog
        onStop OLD dialog
        onDestroy OLD dialog

        //now the dialog is properly shown

        //entering password
        onPause NEW dialog
        onResume background activity
        onStop NEW dialog
        onDestroy NEW dialog

    Using the SINGLE_TOP flag makes no change. However, if I remove the dialog theme from the dialog activity, it IS shown after the restart. So far I didn't want to use a Dialog instead of an Activity, because I consider them problematic sometimes and less encapsulated, and this part has to be quite secure. You may be able to convince me, though. Thank you in advance!


  • Extracting a specific value from command-line output using powershell

    - by Andrew Shepherd
    I've just started learning PowerShell this morning, and I'm grappling with the basics. Here's the problem: I want to query Subversion for the revision number of a repository, and then create a new directory with this revision number. The command svn info outputs a number of label/value pairs, delimited by colons. For example:

        Path: c#
        URL: file:///%5Copdy-doo/Archive%20(H)/Development/CodeRepositories/datmedia/Development/c%23
        Repository UUID: b1d03fca-1949-a747-a1a0-046c2429f46a
        Revision: 58
        Last Changed Rev: 58
        Last Changed Date: 2011-01-12 11:36:12 +1000 (Wed, 12 Jan 2011)

    Here is my current code, which solves the problem. Is there a way to do this with less code? I'm thinking that with the use of piping you could do this in one or two lines. I certainly shouldn't have to use a temporary file.

        $RepositoryRoot = "file:///%5Cdat-ftp/Archive%20(H)/Development/CodeRepositories/datmedia"
        $BuildBase = "/Development/c%23"
        $RepositoryPath = $RepositoryRoot + $BuildBase

        # Outputting the info into a file
        svn info $RepositoryPath | Out-File -FilePath svn_info.txt

        $regex = [regex] '^Revision: (\d{1,4})'
        foreach ($info_line in get-content "svn_info.txt")
        {
            $match = $regex.Match($info_line);
            if ($match.Success)
            {
                $revisionNumber = $match.Groups[1];
                break;
            }
        }
        "Revision number = " + $revisionNumber;


  • Image Line Trace Math Help Hard To Explain

    - by Ozzy
    Hi all, sorry for the confusing title; it's really hard for me to explain what I want. So I created this image :) OK, so the two RED dots are points on an image. The distance between them isn't important. What I want to do is: using the coordinates of the two dots, work out the angle of the space between them (as shown by the black line between the red dots). Then, once the angle is found, on the last red dot, create two points which cross the angle of the first line. Then, from that, scan a half semicircle and get the coordinates of every pixel of the image that the orange line passes. I don't know if this makes any sense to you lot, so I drew another picture: As you can see in the second picture, my idea is applied to a line drawn on a black canvas. The two red dots are the starting coordinates, then at the end of the two dots, a less-than-half semicircle is created. The part that is orange shows the pixels of the image that should be recorded. I have no clue how to start this, so if anyone has any ideas on how I can, or on what I need to do, any help is much appreciated :)
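    If I read the pictures right, this boils down to Math.Atan2 for the line's angle, then sampling points on an arc around the second dot. A sketch in C# (the names and the exact sweep are my guesses at the intent; assumes System.Drawing.Point and System.Collections.Generic):

        // Yields the pixel coordinates on a half-circle of the given radius,
        // centred on dot b, swept symmetrically around the direction a -> b.
        static IEnumerable<Point> HalfCircleScan(Point a, Point b, double radius, int steps)
        {
            double baseAngle = Math.Atan2(b.Y - a.Y, b.X - a.X);   // angle of the black line
            for (int i = 0; i <= steps; i++)
            {
                double t = baseAngle - Math.PI / 2 + Math.PI * i / steps;  // -90 deg .. +90 deg
                yield return new Point(
                    b.X + (int)Math.Round(radius * Math.Cos(t)),
                    b.Y + (int)Math.Round(radius * Math.Sin(t)));          // pixel to sample
            }
        }

    Stepping the radius from 0 outwards and collecting the points would then cover the filled orange region rather than just its rim.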


  • Sending data through POST request from a node.js server to a node.js server

    - by Masiar
    I'm trying to send data through a POST request from a node.js server to another node.js server. What I do in the "client" node.js is the following:

        var options = {
            host: 'my.url',
            port: 80,
            path: '/login',
            method: 'POST'
        };

        var req = http.request(options, function(res) {
            console.log('status: ' + res.statusCode);
            console.log('headers: ' + JSON.stringify(res.headers));
            res.setEncoding('utf8');
            res.on('data', function(chunk) {
                console.log("body: " + chunk);
            });
        });

        req.on('error', function(e) {
            console.log('problem with request: ' + e.message);
        });

        // write data to request body
        req.write('data\n');
        req.write('data\n');
        req.end();

    This chunk is taken more or less from the node.js website, so it should be correct. The only thing I don't see is how to include the username and password in the options variable to actually log in. This is how I deal with the data in the server node.js (I use express):

        app.post('/login', function(req, res) {
            var user = {};
            user.username = req.body.username;
            user.password = req.body.password;
            ...
        });

    How can I add those username and password fields to the options variable to have it logged in? Thanks


  • Explicit behavior with checks vs. implicit behavior

    - by Silviu
    I'm not sure how to construct the question, but I'm interested to know what you guys think of the following situations and which one you would prefer. We're working on a client-server application with WinForms, and we have a control that has some fields automatically calculated upon filling another field. So we have a currency field which, when filled by the user, triggers the automatic filling of another field, maybe more fields. When the user fills the currency field, a Currency object is retrieved from a cache based on the string the user entered. If the entered currency is not found in the cache, a null reference is returned by the cache object. Further down, when asking the application layer to compute the other fields based on the currency, a null currency yields a null for each dependent field. This way the default, implicit behavior is to clear all fields, which is the expected behavior. What I would call the explicit implementation would be to verify whether the Currency object is null, in which case the dependent fields are cleared explicitly. I think the latter version is clearer, less error prone and more testable, but it implies a form of redundancy. The former version is not as clear, and it implies a certain behavior from the application layer which is not expressed in the tests. Maybe in the lower-layer tests; but when the need arises to modify the lower layers, so that given a null currency something else should be returned, I don't think a test that says just that, without a motivation, is going to be an impediment to introducing a bug in the upper layers. What do you guys think?
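    To make the two options concrete, a small C# sketch (every name here is invented, not from the post; assumes using System.Collections.Generic):

        class Currency { public string Code; public decimal Rate; }

        class CurrencyForm
        {
            readonly Dictionary<string, Currency> cache = new Dictionary<string, Currency>();
            public string RateField = "";

            // Implicit style: pass the possibly-null Currency down and rely on
            // the lower layer's null-in/null-out contract to clear the field.
            public void OnCurrencyEnteredImplicit(string code)
            {
                RateField = FormatRate(Find(code));   // null quietly becomes ""
            }

            // Explicit style: the "not found" case is checked and handled here,
            // at the cost of a little redundancy with the layer below.
            public void OnCurrencyEnteredExplicit(string code)
            {
                Currency c = Find(code);
                if (c == null) { RateField = ""; return; }   // intent is visible and testable
                RateField = FormatRate(c);
            }

            Currency Find(string code)
            {
                Currency c;
                return cache.TryGetValue(code, out c) ? c : null;
            }

            static string FormatRate(Currency c)
            {
                return c == null ? "" : c.Rate.ToString("0.####");
            }
        }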


  • Capture keystrokes (e.g., function keys) while a messagebox is up

    - by FastAl
    We have a large WinForms app, and there is a built-in bug reporting system that can be activated during testing via the F5 key. I am capturing the F5 key with .NET's PreFilterMessage mechanism. This works fine on the main forms, modal dialog boxes, etc. Unfortunately, the program also displays Windows message boxes when it needs to. When there is a bug with one of those, e.g. wrong text in the message box, or it shouldn't be there at all, the message filter isn't executed while the message box is up! I realize I could fix it by either rewriting my own message box routine, or kicking off a separate thread that polls GetAsyncKeyState and calls the error reporter from there. However, I was hoping for a method that is less of a hack. Here's code that manifests the problem:

        Public Class Form1
            Implements IMessageFilter

            Private Sub Form1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Click
                MsgBox("now, a messagebox is up!")
            End Sub

            Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
                Application.AddMessageFilter(Me)
            End Sub

            Public Function PreFilterMessage(ByRef m As System.Windows.Forms.Message) _
                    As Boolean Implements IMessageFilter.PreFilterMessage
                Const VK_F5 As Int32 = &H74
                Const WM_KEYDOWN As Integer = &H100
                If m.Msg = WM_KEYDOWN And m.WParam.ToInt32 = VK_F5 Then
                    ' In reality code here takes a screenshot, saves the program state, and shows a bug report interface
                    ' IO.File.AppendAllText("c:\bugs.txt", InputBox("Describe the bug:"))
                End If
                Return False ' let the message through (the original had no explicit Return)
            End Function
        End Class

    Many thanks.
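    The reason the filter goes quiet is that MessageBox runs its own modal message loop, which bypasses Application.AddMessageFilter. One less hacky route than polling is a low-level keyboard hook, which Windows delivers regardless of whose message loop is pumping. A sketch in C# (hedged: bare P/Invoke, error handling omitted):

        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;

        static class F5Hook
        {
            const int WH_KEYBOARD_LL = 13;
            const int WM_KEYDOWN = 0x0100;
            const int VK_F5 = 0x74;

            delegate IntPtr HookProc(int nCode, IntPtr wParam, IntPtr lParam);
            static HookProc proc = Callback;   // keep a reference so the GC can't collect it
            static IntPtr hook = IntPtr.Zero;

            [DllImport("user32.dll")] static extern IntPtr SetWindowsHookEx(int id, HookProc fn, IntPtr mod, uint tid);
            [DllImport("user32.dll")] static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);
            [DllImport("kernel32.dll")] static extern IntPtr GetModuleHandle(string name);

            public static void Install()
            {
                using (var mod = Process.GetCurrentProcess().MainModule)
                    hook = SetWindowsHookEx(WH_KEYBOARD_LL, proc, GetModuleHandle(mod.ModuleName), 0);
            }

            static IntPtr Callback(int nCode, IntPtr wParam, IntPtr lParam)
            {
                if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN
                    && Marshal.ReadInt32(lParam) == VK_F5)   // first KBDLLHOOKSTRUCT field is vkCode
                {
                    // launch the bug reporter here
                }
                return CallNextHookEx(hook, nCode, wParam, lParam);
            }
        }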


  • Why is Raphael.JS creating paper with dimensions 1000x1000?

    - by Bryan
    I have a demo using raphael.js. The code for it is very simple, but when viewed in Internet Explorer (less than version 9) I get a Raphael canvas that is 1000px by 1000px, and I can't figure out why. I'm using version 1.5.2 of Raphael. Code below:

    HTML

        <!DOCTYPE html>
        <html>
        <head>
            <meta charset="UTF-8">
            <title>Raphael Test</title>
            <link rel="stylesheet" type="text/css" href="test.css">
            <link href="../shared/img/favicon.png" rel="shortcut icon">
        </head>
        <body>
            <div id="graph"></div>
            <script src="../shared/js/raphael/raphael-min.js" type="text/javascript"></script>
            <script src="test.js" type="text/javascript"></script>
        </body>
        </html>

    CSS

        /* Graph */
        #graph {
            padding: 5px;
            width: 477px;
            height: 299; /* note: missing unit -- an invalid declaration, which browsers ignore */
        }

    JS

        var holder = document.getElementById('graph')
          , width = holder.scrollWidth
          , height = Math.round(width * 0.5625) + 25
          , p = Raphael(10, 50, width, height)
          , c = p.circle(p.width - 50, p.height - 50, 50);

        alert(p.width + ' & ' + p.height);

    I found a discussion in Raphael's Google group with the same problem but no resolution.


  • Why does ICEfaces send dispose-window request on page unload when using view-scoped bean?

    - by woflrevo
    In our application, ICEfaces always sends a dispose-window request just before navigating to another JSF page. As far as I understand, this should not happen when org.icefaces.lazyWindowScope is set to true and there is no window-scoped bean involved in the current request. But it happens on each link and makes our UI less responsive, and we don't have any window-scoped bean in our application. Is it a bug in ICEfaces that the dispose request is sent when using view-scoped beans? Is it possible to disable? ViewScope is defined in JSF, not in ICEfaces; it should work without this dispose request, I guess...

        @ManagedBean(name="viewScopeBean")
        @ViewScoped
        public class ViewScopeBean {
            public void doSomething() {
                //
            }
        }

    And here is the example JSF:

        <ice:form>
            <ice:commandButton value="doSomething" action="#{viewScopeBean.doSomething}"/>
            <h:link outcome="index" value="Link to same page"/>
        </ice:form>

    To reproduce, do the following using the code above:

    1) Open Firebug's Net tab and activate the Persist option.
    2) Click the doSomething button.
    3) Click "Link to same page" -- a dispose-window request will be sent before navigation.

    Dispose request parameters:

        ice.submit.type=ice.dispose.window
        ice.window=4guthcbue
        javax.faces.ViewState=-8138151632882151449%3A-6709064564386098402

    Environment:

        ICEfaces-EE 2.0.0.GA
        ICEpush-EE 2.0.0.GA
        Mojarra 2.1.1
        JRockit 1.6.0_22
        WebLogic Server 10.3.4.0

    ICEfaces configuration:

        org.icefaces.render.auto: true [default]
        org.icefaces.autoid: true [default]
        org.icefaces.aria.enabled: true [default]
        org.icefaces.blockUIOnSubmit: false [default]
        org.icefaces.compressDOM: false [default]
        org.icefaces.compressResources: true [default]
        org.icefaces.connectionLostRedirectURI: /pages/main.jsf
        org.icefaces.deltaSubmit: false [default]
        org.icefaces.lazyPush: true [default]
        org.icefaces.sessionExpiredRedirectURI: /pages/main.jsf
        org.icefaces.standardFormSerialization: false [default]
        org.icefaces.strictSessionTimeout: false [default]
        org.icefaces.windowScopeExpiration: 1000 [default]
        org.icefaces.mandatoryResourceConfiguration: null [default]
        org.icefaces.uniqueResourceURLs: true [default]
        org.icefaces.lazyWindowScope: true [default]
        org.icefaces.disableDefaultErrorPopups: false [default]


  • Crosstab/Cube/Pivot Components for Delphi

    - by Anagoge
    I'm looking for a Delphi VCL crosstab/cube/pivotcube/OLAP grid component for Delphi 2009, 2010, or XE. I'm willing to sacrifice advanced features to get something open/free (or very cheap if I must), to make it easier to collaborate with any future developers without anyone having to purchase more components than I already use, since this will just be used in one screen. If there isn't anything appropriate out there, I may try to implement something simple on my own. I can live with some fairly basic features: drag and drop to configure dimensions, sort by a column, allow totals/min/max for a column, and (optionally) expand/collapse or drill down to sub-categories. Blazing performance and enterprise scalability are not required, since there should be less than 2000 source rows.

    There appear to be several decent options in the commercial space (ExpressPivotCube, FastCube, HierCube), but they are all a few hundred dollars. This project already uses existing installations of Excel 2007 and SQL Server 2005/2008, so I might consider leveraging those, though I'd prefer a native Delphi component, if possible.

    There are also the very old Decision Cube components included in Delphi's Source\xtab directory, but they apparently no longer support Unicode compilers (Delphi 2009+), since I got dozens of Unicode-related compilation errors while test-compiling that source in Delphi XE. Those components also still link to the long-deprecated BDE! Has anyone modified Decision Cube to support Unicode/pure-TDataSet? The online tutorials I found were incomplete and silent on the dozens of BDE/Unicode compilation errors I see, so I might have to tackle that on my own. Does anyone have suggestions where to start for a free/cheap basic crosstab/pivot grid component?


  • ServerIdentity memory leak with IHttpAsyncHandler

    - by Anton
    I have a .NET web application that consists of a single HTTP handler class that implements IHttpAsyncHandler. All requests to this handler are handled asynchronously, though some requests are short-lived and some are long-lived (nothing over a few seconds). The problem is that memory consumption grows over time as requests are handled. All profiling results point to an unbounded growth of String objects held by instances of System.Runtime.Remoting.ServerIdentity. Every String value is different, but they all look similar to:

        /dd41c00e_1566_4702_b660_c81cdea18a43/vigefresi5pfv8n0ekddg57z_1154.rem

    There is nothing in my application that uses ServerIdentity directly, and unless I am mistaken, the ServerIdentity instances are proportional to the number of incoming requests. If this is an internal .NET structure, it looks like the CLR is not cleaning up after itself. What could be causing the leak?

    UPDATE: A little less than half of the String objects are being held by System.Runtime.Remoting. The remaining String objects are being held by System.Runtime.Serialization and look similar to:

        +1sgess5rjcrgbmp3kqr6bmv_3474.rem

    Also, the problem only seems to occur when lots of simultaneous HTTP web requests arrive.


  • C# File IO with Streams - Best Memory Buffer Size

    - by AJ
    Hi, I am writing a small IO library to assist with a larger (hobby) project. Part of this library performs various functions on a file, which is read/written via a FileStream object. On each StreamReader.Read(...) pass, I fire an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time consuming (it could just be a simple file copy, for example, or may involve encryption...). My main question is: what is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2 KB, which would cover a CD sector size and is a nice multiple of a 512-byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realize that with today's PCs I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app. As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP/HTTP servers (over a local network / fast-ish DSL). What would be the best memory buffer size for those (again, a "best-case" tradeoff between perceived responsiveness and performance)? Thanks in advance for any ideas, Adam
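    For the local-disk case, a sketch of the shape this usually takes (the progress plumbing is illustrative; assumes using System and System.IO). An 80 KB buffer is a defensible default: it stays under the ~85,000-byte large-object-heap threshold while being large enough that per-call overhead stops dominating, and 81920 bytes is the size .NET itself later standardized on as Stream.CopyTo's default:

        // Sketch only: buffered copy that reports cumulative bytes per chunk.
        public static void CopyWithProgress(Stream source, Stream destination,
                                            Action<long> onProgress)
        {
            byte[] buffer = new byte[81920];   // off the LOH, still few syscalls
            long total = 0;
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                destination.Write(buffer, 0, read);
                total += read;
                onProgress(total);   // keep the handler cheap, or throttle it
            }
        }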


  • JavaScript keeps returning ambiguous error (in ASP.NET MVC 2.0)

    - by Erx_VB.NExT.Coder
    This is my function (with other lines I've tried and abandoned left in as comments):

        function DoClicked(eNumber) {
            //obj.style = 'bgcolor: maroon';
            var eid = 'cat' + eNumber;
            //$get(obj).style.backgroundColor = 'maroon';
            //var nObj = $get(obj);
            var nObj = document.getElementById(eid);
            //alert(nObj.getAttribute("style"));
            nObj.style.backgroundColor = 'Maroon';
            alert(nObj.style.backgroundColor);
            //nObj.setAttribute("style", "backgroundcolor: Maroon");
        };

    This error keeps getting returned even after the last line in the function runs:

        Microsoft JScript runtime error: Sys.ArgumentUndefinedException: Value cannot be undefined. Parameter name: method

    This function is called with an "OnSuccess" set in my Ajax.ActionLink call (ASP.NET MVC)... anyone have any ideas on this? I have these referenced... even when I swap the 'debug' versions for the normal versions, I still get an error, but the error has much less information and says 'b' is undefined (probably an internal variable of the MS JS library):

        <script src="../../Scripts/MicrosoftAjax.debug.js" type="text/javascript"></script>
        <script src="../../Scripts/MicrosoftMvcValidation.debug.js" type="text/javascript"></script>
        <script src="../../Scripts/MicrosoftMvcAjax.debug.js" type="text/javascript"></script>
        <script src="../../Scripts/jquery-1.4.1.js" type="text/javascript"></script>

    Also, this is how I am calling the ActionLink method:

        Ajax.ActionLink(item.CategoryName, "SubCategoryList", "Home",
            New With {.CategoryID = item.CategoryID},
            New AjaxOptions With {.UpdateTargetId = "SubCat", .HttpMethod = "Post",
                .OnSuccess = "DoClicked(" & item.CategoryID.ToString & ")"},
            New With {.id = "cat" & item.CategoryID.ToString})


  • Viewing a large-resolution VNC server through a small-resolution viewer in Ubuntu

    - by Madiyaan Damha
    I have two Ubuntu computers: one with a large screen resolution (1920x1600) that is running the default Ubuntu VNC server, and another with a resolution of about 1200x1024 that I use to VNC into the server (I use the default Ubuntu VNC viewer). Everything works fine, except there are annoying scrollbars in the viewer because the server's desktop resolution is so much higher than the viewer's. Is there a way to:

    1) Scale the server's desktop down to the viewer's resolution? I know there will be a loss of image quality, but I am willing to try it out. This should be something like how Windows Media Player or VLC scales down its window (doing some interpolation of pixels).

    2) Automatically shrink the resolution of the server to the client's when I connect, and scale the resolution back when I disconnect? This seems like a less attractive solution.

    3) Any other solution that gurus out there use?

    I am sure someone has experienced this before (annoying scroll bars), so there must be a solution out there. Thanks,


  • Strange UIKit bug, table view row stays selected

    - by Can Berk Güder
    I'm facing what appears to be a UIKit bug, and it takes the combination of two less commonly used features to reproduce it, so please bear with me here. I have quite the common view hierarchy:

        UITabBarController -> UINavigationController -> UITableViewController

    and the table view controller pushes another table view controller onto the navigation controller's stack when a row is selected. There's absolutely nothing special or fancy in the code here. However, the second UITableViewController, the "detail view controller" if you will, does two things:

    It sets hidesBottomBarWhenPushed to YES in its init method, so the tab bar is hidden when this controller is pushed:

        - (id)initWithStyle:(UITableViewStyle)style {
            if (self = [super initWithStyle:style]) {
                self.hidesBottomBarWhenPushed = YES;
            }
            return self;
        }

    It calls setToolbarHidden:NO animated:YES and setToolbarHidden:YES animated:YES on self.navigationController in viewWillAppear: and viewWillDisappear: respectively, causing the UIToolbar provided by UINavigationController to be displayed and hidden with animations:

        - (void)viewWillAppear:(BOOL)animated {
            [super viewWillAppear:animated];
            [self.navigationController setToolbarHidden:NO animated:YES];
        }

        - (void)viewWillDisappear:(BOOL)animated {
            [super viewWillDisappear:animated];
            [self.navigationController setToolbarHidden:YES animated:YES];
        }

    Now, if the second UITableViewController was pushed by selecting the row at the bottom of the screen (it doesn't have to be the last row) in the first controller, this row does not automatically get deselected when the user immediately or eventually returns to the first controller. Further, the row cannot be deselected by calling deselectRowAtIndexPath:animated: on self.tableView in viewWillAppear: or viewDidAppear: in the first controller. I'm guessing this is a bug in UITableViewController's drawing code, which of course only draws visible rows but unfortunately fails to determine correctly whether the bottommost row will be visible in this case. I failed to find anything on this on Google or OpenRadar, and was wondering if anyone else on SO has had this problem or knows a solution/workaround.


  • no statement parsed and wrong number or types of arguments - cfstoredproc

    - by Travis
    I have an Oracle procedure, editBacklog, which I'm calling from a CFM page via cfstoredproc. After several changes to the procedure I started getting:

        ORA-06550: line 1, column 7: PLS-00306: wrong number or types of arguments in call to 'EDITBACKLOG'

    I've gotten this before and found that if I change the name of the procedure it starts working again. I changed the name to editBacklog2 and it worked as I expected it to. I changed the name back to editBacklog and got the same error. I changed the name back to editBacklog2 again and started getting ORA-01003: no statement parsed. NOTHING had changed at this point except the names. I changed the name yet again to editBacklog3 and it works as expected. As of right now:

        editBacklog  = ORA-06550
        editBacklog2 = ORA-01003
        editBacklog3 = works (kinda)

    This whole thing started when I was trying to fix an ORA-01821: date format not recognized error. I fear that when I start changing things I'll get the same lame behavior described above. Either Oracle or ColdFusion is messing with me, and I'll end up liking one of them less because of it. I assume it's probably cfstoredproc caching metadata or something, but neither Google, LiveDocs, nor OTN have much to say about my situation. I'm not the SA or DBA. Anyone have any ideas?


  • Long running stats process - thoughts on language choice?

    - by Josh
    I am on a LAMP stack for a website I am managing. There is a need to roll up usage statistics (a variety of things related to our desktop product), and I initially tackled the problem with PHP (being that I had a bunch of classes to work with the data already). All worked well on my dev box, which was using 5.3. Long story short, 5.1 memory management seems to suck a lot worse, and I've had to do a lot of fooling to get the long-term roll-up scripts to run in a fixed memory space. Our server guys are unwilling to upgrade PHP at this time. I've since moved my dev server back to 5.1 so I don't run into this problem again... For mining MySQL databases to roll up statistics for different periods and resolutions, potentially running a process that does this all the time in the future (as opposed to on a cron schedule), what language choice do you recommend? I was looking at Python (I know it more or less), Java (don't know it that well), or sticking it out with PHP (know it quite well). Thanks for any suggestions. Josh


  • Consume webservice from a .NET DLL - app.config problem

    - by Asaf R
    Hi, I'm building a DLL, let's call it mydll.dll, and in it I sometimes need to call methods from a web service, myservice. mydll.dll is built using C# and .NET 3.5. To consume myservice from mydll I've added a service reference in Visual Studio 2008, which is more or less the same as using svcutil.exe. Doing so creates a class I can instantiate, and adds endpoint and binding configurations to mydll's app.config. The problem here is that mydll's app.config is never loaded; instead, what's loaded is the app.config or web.config of the program that uses mydll. I expect mydll to evolve, which is why I've decoupled its functionality from the rest of my system to begin with. During that evolution it will likely add more web services to call, ruling out manual copy-paste ways of overcoming this problem. I've looked at several possible approaches to attacking this issue:

    1. Manually copy endpoints and bindings from mydll's app.config to the target EXE or web .config file. Couples the modules; not flexible.
    2. Include endpoints and bindings from mydll's app.config in the target .config, using configSource (see here). Also adds coupling between modules.
    3. Programmatically load mydll's app.config, read endpoints and bindings, and instantiate Binding and EndpointAddress.
    4. Use a different tool to create a local frontend for myservice.

    I'm not sure which way to go. Option 3 sounds promising, but as it turns out it's a lot of work and will probably introduce several bugs, so it doubtfully pays off. I'm also not familiar with any tool other than the canonical svcutil.exe. Please either give pros and cons for the above alternatives, provide tips for implementing any of them, or suggest other approaches. Thanks, Asaf
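    A note on option 3: the generated proxy derives from ClientBase<TChannel>, which already has a constructor taking a Binding and an EndpointAddress, so the configuration can be built in code without parsing any .config at all. A sketch, assuming the service reference generated a client class named MyServiceClient (that name, and BasicHttpBinding, are assumptions -- match them to your actual service):

        using System.ServiceModel;

        public static class MyServiceFactory
        {
            public static MyServiceClient Create(string serviceUrl)
            {
                var binding = new BasicHttpBinding();           // must match the service's real binding
                var endpoint = new EndpointAddress(serviceUrl); // e.g. passed in by mydll's host
                return new MyServiceClient(binding, endpoint);
            }
        }

    Done this way, mydll needs no configuration file of its own; the host only has to hand it a URL, which may make option 3 considerably less work than it first appears.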


  • Attempting to find a formula for tessellating rectangles onto a board, where the middle square can't be used

    - by timemirror
    I'm working on a spatial stacking problem... at the moment I'm trying to solve in 2D but will eventually have to make this work in 3D. I divide up space into n x n squares around a central block, therefore n is always odd... and I'm trying to find the number of locations that a rectangle of any dimension less than n x n (eg 1x1, 1x2, 2x2 etc) can be placed, where the middle square is not available. So far I've got this.. total number of rectangles = ((n^2 + n)^2 ) / 4 ..also the total number of squares = (n (n+1) (2n+1)) / 6 However I'm stuck in working out a formula to find how many of those locations are impossible as the middle square would be occupied. So for example: [] [] [] [] [x] [] [] [] [] 3 x 3 board... with 8 possible locations for storing stuff as mid square is in use. I can use 1x1 shapes, 1x2 shapes, 2x1, 3x1, etc... Formula gives me the number of rectangles as: (9+3)^2 / 4 = 144/4 = 36 stacking locations However as the middle square is unoccupiable these can not all be realized. By hand I can see that these are impossible options: 1x1 shapes = 1 impossible (mid square) 2x1 shapes = 4 impossible (anything which uses mid square) 3x1 = 2 impossible 2x2 = 4 impossible etc Total impossible combinations = 16 Therefore the solution I'm after is 36-16 = 20 possible rectangular stacking locations on a 3x3 board. I've coded this in C# to solve it through trial and error, but I'm really after a formula as I want to solve for massive values of n, and also to eventually make this 3D. Can anyone point me to any formulas for these kind of spatial / tessellation problem? Also any idea on how to take the total rectangle formula into 3D very welcome! Thanks!


  • Can't get any speedup from parallelizing Quicksort using Pthreads

    - by Murat Ayfer
    I'm using Pthreads to create a new thread for each partition after the list is split into the right and left halves (less than and greater than the pivot). I do this recursively until I reach the maximum number of allowed threads. When I use printfs to follow what goes on in the program, I clearly see that each thread is doing its delegated work in parallel. However, using a single process is always the fastest. As soon as I try to use more threads, the time it takes to finish almost doubles, and it keeps increasing with the number of threads. I am allowed to use up to 16 processors on the server I am running it on.

    The algorithm goes like this: Split the array into right and left halves by comparing the elements to the pivot. Start a new thread for the right and the left, and wait until the threads join back. If there are more available threads, they can create more recursively. Each thread waits for its children to join.

    Everything makes sense to me, and sorting works perfectly well, but more threads make it slow down immensely. I tried setting a minimum number of elements per partition for a thread to be started (e.g. 50000). I tried an approach where, when a thread is done, it allows another thread to be started, which leads to hundreds of threads starting and finishing throughout. I think the overhead was way too much. So I got rid of that, and if a thread was done executing, no new thread was created. I got a little more speedup, but it was still a lot slower than a single process.

    The code I used is below. http://pastebin.com/UaGsjcq2

    Does anybody have any clue as to what I could be doing wrong?
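    Without the pastebin code it's hard to say definitively, but the usual culprits are thread-creation cost paid on every partition and parent threads that sit blocked in join. A sketch of the standard shape of the fix (in C# with pooled tasks rather than raw pthreads, purely to illustrate the structure): a sequential cutoff so small partitions never pay scheduling overhead, and reused pool workers instead of one new thread per recursion.

        using System;
        using System.Threading.Tasks;

        static class PQuick
        {
            const int Cutoff = 50000;   // below this, sort sequentially

            public static void Sort(int[] a, int lo, int hi)
            {
                if (hi <= lo) return;
                int p = Partition(a, lo, hi);
                if (hi - lo < Cutoff)
                {
                    Sort(a, lo, p - 1);
                    Sort(a, p + 1, hi);
                }
                else
                {
                    Parallel.Invoke(        // pool threads, not one thread per call
                        () => Sort(a, lo, p - 1),
                        () => Sort(a, p + 1, hi));
                }
            }

            static int Partition(int[] a, int lo, int hi)
            {
                int pivot = a[hi], i = lo;   // Lomuto partition around the last element
                for (int j = lo; j < hi; j++)
                    if (a[j] < pivot) Swap(a, i++, j);
                Swap(a, i, hi);
                return i;
            }

            static void Swap(int[] a, int x, int y)
            {
                int t = a[x]; a[x] = a[y]; a[y] = t;
            }
        }

    Called as PQuick.Sort(data, 0, data.Length - 1); the cutoff value is a tunable, not a recommendation.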

