Search Results

Search found 15535 results on 622 pages for 'mat keep'.

Page 570/622

  • Learning to create beautiful / next-generation GUI

    - by ShaChris23
    I really want to create a stunning-looking GUI desktop application that looks like, for example: Mac OS X interface Picasa desktop client on windows IPhone apps Office 2007 I've mostly been programming GUI using Qt/Swing/WinForm and I'm tired of creating so plain looking GUI, lol. So I was thinking about diving into stuff like: jQuery WPF/C# iPhone SDK Silverlight Adobe Air/Flex Just to get some ideas on how to create really cool looking UI. Does that sound like a good list? Any developers here that could share what platform they use to create very cool looking desktop app? On a sidenote, I really wonder what developers at Apple / Microsoft use to develop their own cool-looking software. EDIT A lot of responses talk about the importance of usability over "cool-looking".. I totally agree that usability and simplicity are the most important aspects of user interface design. I've been doing GUI development for a while now ( 3 years), so that I understand. But using cool-looking UI also improves user experience + it could make big difference on whether or not your software sells. I mean, otherwise why would Microsoft/Apple try to make their OS UI look "cooler" everytime there's a new version? Why would websites like pragprog.com, or versionsapp.com. make their websites look like that? Basically you kill 2 birds with one stone: stunnning-looking UI + super usability (because it looks simple and intuitive). That is what I'm striving for. And as far as I know, I cannot achieve that using Qt/Winform. Most of the books I have read just show you how to make average-looking (read: 1990's) UI. I want to learn how to create cool-looking UI. And the only place I see cool-looking UIs these days are the technology I list above. I'm not enamored with any technology; but I just want to know how things are done in other technology to see if I could apply them to the technology I'm using, or see if I could use those technology in my line of work. An example: if I were to pick between this UI and this UI, I probably would pick the latter, if just based on looks alone. Functionally, they are just the same UI; they both allow you to keep track of your time. They both contain buttons and textboxes, etc. But the fact that they look different, also differentiate them in terms of attractiveness. So in all, I think the "ice on the cake" is very important. I would say it's the thing you strive for after you are certain you have a totally intuitive, usable UI.

    Read the article

  • Binary Search Help

    - by aloh
    Hi, for a project I need to implement a binary search. This binary search allows duplicates. I have to get all the index values that match my target. I've thought about doing it this way if a duplicate is found to be in the middle: Target = G Say there is this following sorted array: B, D, E, F, G, G, G, G, G, G, Q, R S, S, Z I get the mid which is 7. Since there are target matches on both sides, and I need all the target matches, I thought a good way to get all would be to check mid + 1 if it is the same value. If it is, keep moving mid to the right until it isn't. So, it would turn out like this: B, D, E, F, G, G, G, G, G, G (MID), Q, R S, S, Z Then I would count from 0 to mid to count up the target matches and store their indexes into an array and return it. That was how I was thinking of doing it if the mid was a match and the duplicate happened to be in the mid the first time and on both sides of the array. Now, what if it isn't a match the first time? For example: B, D, E, F, G, G, J, K, L, O, Q, R, S, S, Z Then as normal, it would grab the mid, then call binary search from first to mid-1. B, D, E, F, G, G, J Since G is greater than F, call binary search from mid+1 to last. G, G, J. The mid is a match. Since it is a match, search from mid+1 to last through a for loop and count up the number of matches and store the match indexes into an array and return. Is this a good way for the binary search to grab all duplicates? Please let me know if you see problems in my algorithm and hints/suggestions if any. The only problem I see is that if all the matches were my target, I would basically be searching the whole array but then again, if that were the case I still would need to get all the duplicates. Thank you BTW, my instructor said we cannot use Vectors, Hash or anything else. He wants us to stay on the array level and get used to using them and manipulating them.
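
    A minimal Java sketch of that idea (class and method names here are illustrative, not from the original post): run an ordinary binary search to land on any one occurrence of the target, then walk left and right from that hit to pick up the whole run of duplicates, so the extra work is proportional to the number of matches rather than to the array as a whole. It also respects the "arrays only" constraint mentioned at the end.

      import java.util.Arrays;

      public class DuplicateBinarySearch {

          // Returns the indexes of every element equal to target in a sorted array,
          // using plain arrays only (no collections).
          static int[] findAll(char[] sorted, char target) {
              int lo = 0, hi = sorted.length - 1, hit = -1;

              // Ordinary binary search for any one occurrence.
              while (lo <= hi) {
                  int mid = (lo + hi) >>> 1;
                  if (sorted[mid] == target)     { hit = mid; break; }
                  else if (sorted[mid] < target) { lo = mid + 1; }
                  else                           { hi = mid - 1; }
              }
              if (hit == -1) return new int[0];          // target not present at all

              // Walk left and right from the hit to find the run of duplicates.
              int first = hit;
              while (first > 0 && sorted[first - 1] == target) first--;
              int last = hit;
              while (last < sorted.length - 1 && sorted[last + 1] == target) last++;

              int[] matches = new int[last - first + 1];
              for (int i = 0; i < matches.length; i++) matches[i] = first + i;
              return matches;
          }

          public static void main(String[] args) {
              char[] data = {'B','D','E','F','G','G','G','G','G','G','Q','R','S','S','Z'};
              System.out.println(Arrays.toString(findAll(data, 'G')));   // [4, 5, 6, 7, 8, 9]
          }
      }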

    Read the article

  • SQL Server 2008 Filestream Impersonation Access Denied error

    - by Adi
    I've been trying to upload a file to the database using SQL SERVER 2008 Filestream and Impersonation technique to save the file in the file system, but i keep getting Access Denied error; even though i've set the permissions for the impersonating user to the Filestream folder(C:\SQLFILESTREAM\Dev_DB). when i debugged the code, i found the server return a unc path(\Server_Name\MSSQLSERVER\v1\Dev_LMDB\dbo\FileData\File_Data\13C39AB1-8B91-4F5A-81A1-940B58504C17), which was not accessible through windows explorer. I've my web application hosted on local maching(Windows 7). SQL Server is located on a remote server(Windows Server 2008 R2). Sql authentication was used to call the stored procedure. Following is the code i've used to do the above operations. SqlCommand sqlCmd = new SqlCommand("AddFile"); sqlCmd.CommandType = CommandType.StoredProcedure; sqlCmd.Parameters.Add("@File_Name", SqlDbType.VarChar, 512).Value = filename; sqlCmd.Parameters.Add("@File_Type", SqlDbType.VarChar, 5).Value = Path.GetExtension(filename); sqlCmd.Parameters.Add("@Username", SqlDbType.VarChar, 20).Value = username; sqlCmd.Parameters.Add("@Output_File_Path", SqlDbType.VarChar, -1).Direction = ParameterDirection.Output; DAManager PDAM = new DAManager(DAManager.getConnectionString()); using (SqlConnection connection = (SqlConnection)PDAM.CreateConnection()) { connection.Open(); SqlTransaction transaction = connection.BeginTransaction(); WindowsImpersonationContext wImpersonationCtx; NetworkSecurity ns = null; try { PDAM.ExecuteNonQuery(sqlCmd, transaction); string filepath = sqlCmd.Parameters["@Output_File_Path"].Value.ToString(); sqlCmd = new SqlCommand("SELECT GET_FILESTREAM_TRANSACTION_CONTEXT()"); sqlCmd.CommandType = CommandType.Text; byte[] Context = (byte[])PDAM.ExecuteScalar(sqlCmd, transaction); byte[] buffer = new byte[4096]; int bytedRead; ns = new NetworkSecurity(); wImpersonationCtx = ns.ImpersonateUser(IMP_Domain, IMP_Username, IMP_Password, LogonType.LOGON32_LOGON_INTERACTIVE, LogonProvider.LOGON32_PROVIDER_DEFAULT); SqlFileStream sfs = new SqlFileStream(filepath, Context, System.IO.FileAccess.Write); while ((bytedRead = inFS.Read(buffer, 0, buffer.Length)) != 0) { sfs.Write(buffer, 0, bytedRead); } sfs.Close(); transaction.Commit(); } catch (Exception ex) { transaction.Rollback(); } finally { sqlCmd.Dispose(); connection.Close(); connection.Dispose(); ns.undoImpersonation(); wImpersonationCtx = null; ns = null; } } Can someone help me with this issue. Reference Exception: Type : System.ComponentModel.Win32Exception, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Message : Access is denied Source : System.Data Help link : NativeErrorCode : 5 ErrorCode : -2147467259 Data : System.Collections.ListDictionaryInternal TargetSite : Void OpenSqlFileStream(System.String, Byte[], System.IO.FileAccess, System.IO.FileOptions, Int64) Stack Trace : at System.Data.SqlTypes.SqlFileStream.OpenSqlFileStream(String path, Byte[] transactionContext, FileAccess access, FileOptions options, Int64 allocationSize) at System.Data.SqlTypes.SqlFileStream..ctor(String path, Byte[] transactionContext, FileAccess access, FileOptions options, Int64 allocationSize) at System.Data.SqlTypes.SqlFileStream..ctor(String path, Byte[] transactionContext, FileAccess access) Thanks

    Read the article

  • XSL - Unknown Error in FF media:content/@url

    - by danit
    I keep getting "Unknow Error occurred" when i try this in my XSLT: <table class="TEDtalks"> <xsl:for-each select="/rss/channel/item"> <tr> <td><xsl:value-of select="title"/></td> <td> <xsl:value-of select="media:content/@url" /> </td> </tr> </xsl:for-each> </table> The XML <rss> <channel> <item> <title>TEDTalks : Karen Armstrong: Let's revive the Golden Rule - Karen Armstrong (2009)</title> <itunes:author>Karen Armstrong</itunes:author> <description>Weeks from the Charter for Compassion launch, Karen Armstrong looks at religion's role in the 21st century: Will its dogmas divide us? Or will it unite us for common good? She reviews the catalysts that can drive the world's faiths to rediscover the Golden Rule.&lt;img src="http://feeds.feedburner.com/~r/TEDTalks_video/~4/th6FBgvV22o" height="1" width="1"/&gt;</description> <itunes:subtitle>Karen Armstrong: Let's revive the Golden Rule</itunes:subtitle> <itunes:summary><![CDATA[Weeks from the Charter for Compassion launch, Karen Armstrong looks at religion's role in the 21st century: Will its dogmas divide us? Or will it unite us for common good? She reviews the catalysts that can drive the world's faiths to rediscover the Golden Rule.]]></itunes:summary> <link>http://feedproxy.google.com/~r/TEDTalks_video/~3/th6FBgvV22o/647</link> <guid isPermaLink="false">http://video.ted.com/talks/podcast/KarenArmstrong_2009G.mp4</guid> <pubDate>Tue, 29 Sep 2009 12:46:00 -0500</pubDate> <category>Higher Education</category> <itunes:explicit>no</itunes:explicit> <itunes:duration>00:09:54</itunes:duration> <itunes:keywords>TED</itunes:keywords> <media:content url="http://feedproxy.google.com/~r/TEDTalks_video/~5/XT8k_DqlzGc/KarenArmstrong_2009G.mp4" fileSize="33726021" type="video/mp4" />

    Read the article

  • Subroutine & GoTo design

    - by sub
    I have a strange question concerning subroutines: As I'm creating a minimal language and I don't want to add high-level loops like while or for I was planning on just adding gotos to keep it Turing-Complete. Now I thought, eww - gotos - I wouldn't want to program in that language if I had to use gotos so often. So I thought about adding subroutines instead. I see the difference as the following: gotos Go to (captain obvious) a previously defined point and continue executing the program from there. Leads to hardly understandable and buggy code, I think that's a fact. subroutines Similiar: You define their starting point somewhere, as you call them the program jumps there - but the subroutine can go back to the point it was called from with return. Okay. Why didn't I just add the more function-like, nice looking subroutines? Because: In order to make return work if I call subroutines from within subroutines from within other subroutines, I'd have to use a stack containing the point where the currently running subroutine came from at top. That would then mean that I would, if I create loops using the subroutines, end up with an extremely memory-eating, overflowing stack with return locations. Not good. Don't think of my subroutines as functions. They are just gotos that return to the point they were called from, they don't actually give back values like the return x; statement in nearly all today's languages. Now to my actual questions: How can I solve the above problem with the stack overflow on loops with subroutines? Do I have to add a separate goto language construct without the return option? Assembler doesn't have loops but as I have seen myJumpPoint:, jnz, jz, retn. That means to me that there must also be a stack containing all the return locations. Am I right with that? What about long running loops then? Don't they overflow the stack/eat memory then? Am I getting the retn symbol in assembler totally wrong? If yes, please explain it to me.
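
    On the assembler question: yes, call/ret instructions push and pop a return address on a stack, while a plain jump only moves the instruction pointer and touches no stack at all, which is why loops written with jumps never grow it. A toy interpreter sketch in Java (the opcode set and the little program are invented purely for illustration):

      import java.util.ArrayDeque;
      import java.util.Deque;

      public class TinyVm {
          // Opcodes for a deliberately minimal machine.
          static final int JMP = 0;    // jump: set the instruction pointer, no stack involved
          static final int CALL = 1;   // call: push a return address, then jump
          static final int RET = 2;    // ret: pop the return address and jump back
          static final int DECJNZ = 3; // decrement the counter, jump if it is not yet zero
          static final int HALT = 4;

          public static void main(String[] args) {
              // A countdown loop written with a conditional jump, followed by one call.
              int[][] program = {
                  {DECJNZ, 0},   // 0: counter--, if counter != 0 goto 0  (no stack growth)
                  {CALL, 3},     // 1: call the "subroutine" at address 3 (pushes address 2)
                  {HALT, 0},     // 2: stop
                  {RET, 0},      // 3: the subroutine immediately returns
              };

              int counter = 5;
              int ip = 0;                                   // instruction pointer
              Deque<Integer> returnStack = new ArrayDeque<>();

              while (program[ip][0] != HALT) {
                  int op = program[ip][0], arg = program[ip][1];
                  switch (op) {
                      case JMP:    ip = arg; break;
                      case DECJNZ: counter--; ip = (counter != 0) ? arg : ip + 1; break;
                      case CALL:   returnStack.push(ip + 1); ip = arg; break;
                      case RET:    ip = returnStack.pop(); break;
                  }
                  System.out.println("ip=" + ip + " counter=" + counter
                          + " returnStack depth=" + returnStack.size());
              }
          }
      }

    The run shows the return stack staying empty through all five loop iterations and only growing when CALL executes, which is the argument for keeping a plain goto (or a real loop construct) alongside the returning subroutines.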

    Read the article

  • Sending multiline message via sockets without closing the connection

    - by Yasir Arsanukaev
    Hello folks. Currently I have this code of my client-side Haskell application: import Network.Socket import Network.BSD import System.IO hiding (hPutStr, hPutStrLn, hGetLine, hGetContents) import System.IO.UTF8 connectserver :: HostName -- ^ Remote hostname, or localhost -> String -- ^ Port number or name -> IO Handle connectserver hostname port = withSocketsDo $ do -- withSocketsDo is required on Windows -- Look up the hostname and port. Either raises an exception -- or returns a nonempty list. First element in that list -- is supposed to be the best option. addrinfos <- getAddrInfo Nothing (Just hostname) (Just port) let serveraddr = head addrinfos -- Establish a socket for communication sock <- socket (addrFamily serveraddr) Stream defaultProtocol -- Mark the socket for keep-alive handling since it may be idle -- for long periods of time setSocketOption sock KeepAlive 1 -- Connect to server connect sock (addrAddress serveraddr) -- Make a Handle out of it for convenience h <- socketToHandle sock ReadWriteMode -- Were going to set buffering to LineBuffering and then -- explicitly call hFlush after each message, below, so that -- messages get logged immediately hSetBuffering h LineBuffering return h sendid :: Handle -> String -> IO String sendid h id = do hPutStr h id -- Make sure that we send data immediately hFlush h -- Retrieve results hGetLine h The code portions in connectserver are from this chapter of Real World Haskell book where they say: When dealing with TCP data, it's often convenient to convert a socket into a Haskell Handle. We do so here, and explicitly set the buffering – an important point for TCP communication. Next, we set up lazy reading from the socket's Handle. For each incoming line, we pass it to handle. After there is no more data – because the remote end has closed the socket – we output a message about that. Since hGetContents blocks until the server closes the socket on the other side, I used hGetLine instead. It satisfied me before I decided to implement multiline output to client. I wouldn't like the server to close a socket every time it finishes sending multiline text. The only simple idea I have at the moment is to count the number of linefeeds and stop reading lines after two subsequent linefeeds. Do you have any better suggestions? Thanks.
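
    The two-linefeeds idea is essentially the framing HTTP uses for its header block: a blank line marks end-of-message, and it works as long as the payload itself can never contain an empty line (the other common option is to length-prefix each reply). A small illustration of the sentinel version, written in Java rather than Haskell only to keep the sketches on this page in one language, with made-up sample messages:

      import java.io.BufferedReader;
      import java.io.IOException;
      import java.io.StringReader;
      import java.util.ArrayList;
      import java.util.List;

      public class BlankLineFraming {

          // Reads one logical message: every line up to (but not including) the first
          // empty line. The underlying connection stays open for the next message.
          static List<String> readMessage(BufferedReader in) throws IOException {
              List<String> lines = new ArrayList<>();
              String line;
              while ((line = in.readLine()) != null) {
                  if (line.isEmpty()) break;   // blank line = end-of-message sentinel
                  lines.add(line);
              }
              return lines;
          }

          public static void main(String[] args) throws IOException {
              // Two messages separated by blank lines, standing in for socket traffic.
              BufferedReader fake = new BufferedReader(
                      new StringReader("first reply line\nsecond reply line\n\nnext message\n\n"));
              System.out.println(readMessage(fake));   // [first reply line, second reply line]
              System.out.println(readMessage(fake));   // [next message]
          }
      }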

    Read the article

  • Google Bookmarks Accelerator for IE8

    - by MACHiNESMiTH
    To Editors: Sorry, I didnt realise the question thing till you pointed it out. Also I didn't know about the appengine tag, I'm assuming it was selected by mistake from that JS autosuggestion box (it usually happens with me, I run a P3) my apologies for that too. I've edited it now, if it doesnt work let me know. Hi all, I'm making (or at least trying to make) a Google bookmarks accelerator for IE8, but I keep running into "Internet Explorer could not install this accelerator. There was a problem with the Accelerator's information." Does anyone know why this is happening? Here is the code I've made <?xml version="1.0" encoding="UTF-8" ?> <os:openServiceDescription xmlns="http://www.microsoft.com/schemas/openservicedescription/1.0"> <os:homepageUrl>http://www.google.com/bookmarks</os:homepageUrl> <os:display> <os:description>Add this page to GoogleBookmarks</os:description> <os:name>Add to GoogleBookmarks</os:name> <os:icon>http://www.google.com/favicon.ico</os:icon> </os:display> <os:activity category="Bookmarks"> <os:activityAction context="document"> <os:execute method="get" action="http://www.google.com/bookmarks/mark?op=add&output=popup"> <os:parameter name="bkmk" value="{documentUrl}" /> <os:parameter name="title" value="{documentTitle}" /> <os:parameter name="annotation" value="{selection}" /> </os:execute> </os:activityAction> </os:activity> </os:openServiceDescription> These are the references I used: MSDN Developers Guide MSDN Format Specification Unofficial Google API (used as a look up so far) and yes i know that I cannot send document variables between schemes (HTTP - HTTPS, for example) or between servers in different security zones (Intranet - Internet etc) Also, if this is the wrong place, what are the recommended sites and/or forums for xml related posts? (not really questions - but more like full blown detailed rant like posts) Thanks

    Read the article

  • Java - Highest, Lowest and Average

    - by Emily
    Hello, I've just started studying and I need help on one of my exercises. I need the end user to input a rain fall number for each month. I then need to out put the average rainfall, highest month and lowest month and the months which rainfall was above average. I keep getting the same number in the highest and lowest and I have no idea why. I am seriously pulling my hair out. Any help would be greatly appreciated. This is what I have so far: public class rainfall { /** * @param args */ public static void main(String[] args) { int[] numgroup; numgroup = new int [13]; ConsoleReader console = new ConsoleReader(); int highest; int lowest; int index; int tempVal; int minMonth; int minIndex; int maxMonth; int maxIndex; System.out.println("Welcome to Rainfall"); for(index = 1; index < 13; index = index + 1) { System.out.println("Please enter the rainfall for month " + index); tempVal = console.readInt(); while (tempVal>100 || tempVal<0) { System.out.println("The rating must be within 0...100. Try again"); tempVal = console.readInt(); } numgroup[index] = tempVal; } lowest = numgroup[0]; for(minIndex = 0; minIndex < numgroup.length; minIndex = minIndex + 1); { if (numgroup[0] < lowest) { lowest = numgroup[0]; minMonth = minIndex; } } highest = numgroup[1]; for(maxIndex = 0; maxIndex < numgroup.length; maxIndex = maxIndex + 1); { if (numgroup[1] > highest) { highest = numgroup[1]; maxMonth = maxIndex; } } System.out.println("The average monthly rainfall was "); System.out.println("The lowest monthly rainfall was month " + minIndex); System.out.println("The highest monthly rainfall was month " + maxIndex); System.out.println("Thank you for using Rainfall"); } private static ConsoleReader ConsoleReader() { return null; } } Thanks, Emily
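
    Two things in the posted code explain why the highest and lowest always come out the same: each search loop has a stray semicolon straight after the for(...) header, so its body runs exactly once after the loop has already finished, and the comparisons read numgroup[0] and numgroup[1] instead of the element at the current index (the program also prints minIndex/maxIndex rather than minMonth/maxMonth, and never computes the average). A corrected sketch, with java.util.Scanner standing in for the ConsoleReader class that isn't shown in the excerpt:

      import java.util.Scanner;

      public class Rainfall {
          public static void main(String[] args) {
              Scanner console = new Scanner(System.in);
              int[] rain = new int[13];            // index 1..12 = January..December
              int total = 0;

              for (int month = 1; month <= 12; month++) {
                  System.out.println("Please enter the rainfall for month " + month);
                  int value = console.nextInt();
                  while (value > 100 || value < 0) {
                      System.out.println("The rating must be within 0...100. Try again");
                      value = console.nextInt();
                  }
                  rain[month] = value;
                  total += value;
              }

              // Start both extremes at month 1, then compare against the indexed month.
              int minMonth = 1, maxMonth = 1;
              for (int month = 2; month <= 12; month++) {   // note: no ';' after the header
                  if (rain[month] < rain[minMonth]) minMonth = month;
                  if (rain[month] > rain[maxMonth]) maxMonth = month;
              }

              double average = total / 12.0;
              System.out.println("The average monthly rainfall was " + average);
              System.out.println("The lowest monthly rainfall was month " + minMonth);
              System.out.println("The highest monthly rainfall was month " + maxMonth);
              System.out.println("Thank you for using Rainfall");
          }
      }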

    Read the article

  • How can I inject Javascript (including Prototype.js) in other sites without cluttering the global namespace?

    - by Daniel Magliola
    I'm currently on a project that is a big site that uses the Prototype library, and there is already a humongous amount of Javascript code. We're now working on a piece of code that will get "injected" into other people's sites (picture people adding a <script> tag in their sites) which will then run our code and add a bunch of DOM elements and functionality to their site. This will have new pieces of code, and will also reuse a lot of the code that we use on our main site. The problem I have is that it's of course not cool to just add a <script> that will include Prototype in people's pages. If we do that in a page that's already using ANY framework, we're guaranteed to screw everything up. jQuery gives us the option to "rename" the $ object, so it could handle this situation decently, except obviously for the fact that we're not using jQuery, so we'd have to migrate everything. Right now i'm contemplating a number of ugly choices, and I'm not sure what's best... Rewrite everything to use jQuery, with a renamed $ object everywhere. Creating a "new" Prototype library with only the subset we'd be using in "injected" code, and renaming $ to something else. Then again I'd have to adapt the parts of my code that would be shared somehow. Not using a library at all in injected code, to keep it as clean as possible, and rewriting the shared code to use no library at all. This would obviously degenerate into us creating our own frankenstein of a library, which is probably the worst case scenario ever. I'm wondering what you guys think I could do, and also whether there's some magic option that would solve all my problems... For example, do you think I could use something like Caja / Cajita to sandbox my own code and isolate it from the rest of the site, and have Prototype inside of there? Or am I completely missing the point with that? I also read once about a technique for bookmarklets, were you add your code like this: (function() { /* your code */ })(); And then your code is all inside your anonymous function and you haven't touched the global namespace at all. Do you think I could make one file containing: (function() { /* Full Code of the Prototype file here */ /* All my code that will run in the "other" site */ InitializeStuff_CreateDOMElements_AttachEventHandlers(); })(); Would that work? Would it accomplish the objective of not cluttering the global namespace, and not killing the functionality on a site that uses jQuery, for example? Or is Prototype too complex somehow to isolate it like that? (NOTE: I think I know that that would create closures everywhere and that's slower, but I don't care too much about performance, my code is not doing anything that complex)

    Read the article

  • How to merge a "branch" that isn't really a branch (wasn't created by an svn copy)

    - by MatrixFrog
    I'm working on a team with lots of people who are pretty unfamiliar with the concepts of version control systems, and are just kind of doing whatever seems to work, by trial and error. Someone created a "branch" from the trunk that is not ancestrally related to the trunk. My guess is it went something like this: They created a folder in branches. They checked out all the code from the trunk to somewhere on their desktop. They added all that code to the newly created folder as though it was a bunch of brand new files. So the repository isn't aware that all that code is actually just a copy of the trunk. When I look at the history of that branch in TortoiseSVN, and uncheck the "Stop on copy/rename" box, there is no revision that has the trunk (or any other path) under the "Copy from path" column. Then they made lots of changes on their "branch". Meanwhile, others were making lots of changes on the trunk. We tried to do a merge and of course it doesn't work. Because, the trunk and the fake branch are not ancestrally related. I can see only two ways to resolve this: Go through the logs on the "branch", look at every change that was made, and manually apply each change to the trunk. Go through the logs on the trunk, look at every change that was made between revision 540 (when the "branch" was created) and HEAD, and manually apply each change to the "branch". This involves 7 revisions one way or 11 revisions the other way, so neither one is really that terrible. But is there any way to cause the repository to "realize" that the branch really IS ancestrally related even though it was created incorrectly, so that we can take advantage of the built-in merging functionality in Eclipse/TortoiseSVN? (You may be wondering: Why did your company hire these people and allow them to access the SVN repository without making sure they knew how to use it properly first?! We didn't -- this is a school assignment, which is a collaboration between two different classes -- the ones in the lower class were given a very quick hand-wavey "overview" of SVN which didn't really teach them anything. I've asked everyone in the group to please PLEASE read the svn book, and I'll make sure we (the slightly more experienced half of the team) keep a close eye on the repository to ensure this doesn't happen again.)

    Read the article

  • PHP script works fine until I send it a parameter from HTTPService in Flash Builder 4?

    - by ben
    I'm using a PHP script to read an RSS feed in my Flex 4 app. The script works when I put the url of the feed in the actual script, but I can't get it to work when I try to send the URL as a parameter from a HTTPService in Flex. Can anyone tell me what I'm doing wrong? Here is the HTTPService from Flex 4 that I'm using: <mx:HTTPService url="http://talk.6te.net/proxy.php" id="proxyService" method="POST" result="rssResult()" fault="rssFault()"> <mx:request> <url> http://feeds.feedburner.com/nah_right </url> </mx:request> </mx:HTTPService> This is the script that works: <?php $ch = curl_init(); $timeout = 30; $userAgent = $_SERVER['HTTP_USER_AGENT']; curl_setopt($ch, CURLOPT_URL, "http://feeds.feedburner.com/nah_right"); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout); curl_setopt($ch, CURLOPT_USERAGENT, $userAgent); $response = curl_exec($ch); if (curl_errno($ch)) { echo curl_error($ch); } else { curl_close($ch); echo $response; } ?> But this is what I actually want to use, but it doesn't work: <?php $ch = curl_init(); $timeout = 30; $userAgent = $_SERVER['HTTP_USER_AGENT']; curl_setopt($ch, CURLOPT_URL, $_REQUEST['url']); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout); curl_setopt($ch, CURLOPT_USERAGENT, $userAgent); $response = curl_exec($ch); if (curl_errno($ch)) { echo curl_error($ch); } else { curl_close($ch); echo $response; } ?> Here is the request and response output of the HTTPService from the network monitor in Flash Builder 4 (using the PHP script that doesn't work): Request: POST /proxy.php HTTP/1.1 Host: talk.6te.net User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Content-type: application/x-www-form-urlencoded Content-length: 97 url=%0A%09%09%09%09%09http%3A%2F%2Ffeeds%2Efeedburner%2Ecom%2Fnah%5Fright%0A%20%20%20%20%09%09%09 Response: HTTP/1.1 200 OK Date: Mon, 10 May 2010 03:23:27 GMT Server: Apache X-Powered-By: PHP/5.2.13 Content-Length: 15 Content-Type: text/html <url> malformed I've tried putting the URL in " " in the HTTPService, but that didn't do anything. Any help would be greatly appreciated!

    Read the article

  • Logging raw HTTP request/response in ASP.NET MVC & IIS7

    - by Greg Beech
    I'm writing a web service (using ASP.NET MVC) and for support purposes we'd like to be able to log the requests and response in as close as possible to the raw, on-the-wire format (i.e including HTTP method, path, all headers, and the body) into a database. What I'm not sure of is how to get hold of this data in the least 'mangled' way. I can re-constitute what I believe the request looks like by inspecting all the properties of the HttpRequest object and building a string from them (and similarly for the response) but I'd really like to get hold of the actual request/response data that's sent on the wire. I'm happy to use any interception mechanism such as filters, modules, etc. and the solution can be specific to IIS7. However, I'd prefer to keep it in managed code only. Any recommendations? Edit: I note that HttpRequest has a SaveAs method which can save the request to disk but this reconstructs the request from the internal state using a load of internal helper methods that cannot be accessed publicly (quite why this doesn't allow saving to a user-provided stream I don't know). So it's starting to look like I'll have to do my best to reconstruct the request/response text from the objects... groan. Edit 2: Please note that I said the whole request including method, path, headers etc. The current responses only look at the body streams which does not include this information. Edit 3: Does nobody read questions around here? Five answers so far and yet not one even hints at a way to get the whole raw on-the-wire request. Yes, I know I can capture the output streams and the headers and the URL and all that stuff from the request object. I already said that in the question, see: I can re-constitute what I believe the request looks like by inspecting all the properties of the HttpRequest object and building a string from them (and similarly for the response) but I'd really like to get hold of the actual request/response data that's sent on the wire. If you know the complete raw data (including headers, url, http method, etc.) simply cannot be retrieved then that would be useful to know. Similarly if you know how to get it all in the raw format (yes, I still mean including headers, url, http method, etc.) without having to reconstruct it, which is what I asked, then that would be very useful. But telling me that I can reconstruct it from the HttpRequest/HttpResponse objects is not useful. I know that. I already said it. Please note: Before anybody starts saying this is a bad idea, or will limit scalability, etc., we'll also be implementing throttling, sequential delivery, and anti-replay mechanisms in a distributed environment, so database logging is required anyway. I'm not looking for a discussion of whether this is a good idea, I'm looking for how it can be done.

    Read the article

  • Mapscript queryByPoint return no results

    - by lucian.jp
    I have a dynamically generated mapfile made with c# mapscript that is defined like: MAP EXTENT 5.91828 45.63552 5.92346 45.65051 IMAGECOLOR 192 192 192 IMAGETYPE png SIZE 256 256 STATUS ON TRANSPARENT TRUE UNITS METERS NAME "GMAP_TILE" OUTPUTFORMAT NAME "png" MIMETYPE "image/png" DRIVER "GD/PNG" EXTENSION "png" IMAGEMODE "PC256" TRANSPARENT TRUE END SYMBOL NAME "circle" TYPE ELLIPSE FILLED TRUE POINTS 1 1 END END SYMBOL NAME ">" TYPE TRUETYPE ANTIALIAS TRUE CHARACTER ">" GAP -20 FONT "arial" POSITION CC END PROJECTION "proj=merc" "a=6378137" "b=6378137" "lat_ts=0.0" "lon_0=0.0" "x_0=0.0" "y_0=0" "units=m" "k=1.0" "nadgrids=@null" END LEGEND IMAGECOLOR 255 255 255 KEYSIZE 20 10 KEYSPACING 5 5 LABEL SIZE MEDIUM TYPE BITMAP BUFFER 0 COLOR 0 0 0 FORCE FALSE MINDISTANCE -1 MINFEATURESIZE -1 OFFSET 0 0 PARTIALS TRUE END POSITION LL STATUS OFF END QUERYMAP COLOR 255 255 0 SIZE -1 -1 STATUS ON STYLE HILITE END SCALEBAR ALIGN CENTER COLOR 0 0 0 IMAGECOLOR 255 255 255 INTERVALS 4 LABEL SIZE MEDIUM TYPE BITMAP BUFFER 0 COLOR 0 0 0 FORCE FALSE MINDISTANCE -1 MINFEATURESIZE -1 OFFSET 0 0 PARTIALS TRUE END POSITION LL SIZE 200 3 STATUS OFF STYLE 0 UNITS MILES END WEB IMAGEPATH "" IMAGEURL "" QUERYFORMAT text/html LEGENDFORMAT text/html BROWSEFORMAT text/html END LAYER NAME "Troncons" PROJECTION "proj=longlat" "ellps=WGS84" "datum=WGS84" END STATUS DEFAULT TEMPLATE "nofile.html" TOLERANCE 100 TOLERANCEUNITS METERS TYPE LINE UNITS METERS CLASS NAME "Troncons" STYLE ANGLE 360 COLOR 0 0 255 SIZE 5 SYMBOL "circle" WIDTH 5 END STYLE ANGLE 360 COLOR 0 0 0 SIZE 12 SYMBOL ">" WIDTH 1 END END FEATURE POINTS 5.91828 45.63552 5.91876 45.63611 5.91898 45.6364 5.91936 45.63701 5.91952 45.63731 5.91968 45.63762 5.91993 45.63825 5.92003 45.63856 5.92018 45.63919 5.92028 45.63983 5.92031 45.64014 5.92033 45.64046 5.92034 45.64077 5.92034 45.64108 5.92034 45.64171 5.92035 45.64234 5.92035 45.6428 5.92037 45.6433 5.9204 45.64394 5.92046 45.64458 5.92056 45.64522 5.92062 45.64554 5.92069 45.64586 5.92077 45.64617 5.92097 45.64679 5.92122 45.64739 5.92136 45.64769 5.92169 45.64828 5.92207 45.64886 5.92228 45.64914 5.92272 45.64969 5.92321 45.65023 5.92346 45.65051 END END END END I try to queryByPoint to retreive the index of the shape clciked near. In the code below I made a specific test function with fixed point instead of points passed by parameter so I am sure the point I use is actually part of a feature. In my case I use the first point of the only feature contained in mapfile. public string GetTronconId() { //_map is my dynamically created mapObj if (_map != null) for (int i = 0; i < _map.numlayers; i++) { layerObj layer = _map.getLayer(i); // Code never pass this point if (layer.queryByPoint(_map, new pointObj(5.91898, 45.6364, 0, 0), (int) MS_QUERY_MODE.MS_QUERY_MULTIPLE, 100) == (int) MS_RETURN_VALUE.MS_SUCCESS) { int numresults = layer.getNumResults(); if (numresults != 0) { layer.open(); for (int j = 0; j < numresults; j++) { resultCacheMemberObj resultat = layer.getResult(j); shapeObj shape = null; if (layer.getShape(shape, resultat.tileindex, resultat.shapeindex) == (int) MS_RETURN_VALUE.MS_SUCCESS) return shape.getValue(0); } } } } return null; } I have a dummy TEMPLATE set, I do not eveen have to use the tolerance since the point is derectly in a shape, but the queryByPoint keep returning me MS_FAILURE. From my searches on the web everything seem to be OK. Any idea?

    Read the article

  • Difference in output from use of synchronized keyword and join()

    - by user2964080
    I have 2 classes, public class Account { private int balance = 50; public int getBalance() { return balance; } public void withdraw(int amt){ this.balance -= amt; } } and public class DangerousAccount implements Runnable{ private Account acct = new Account(); public static void main(String[] args) throws InterruptedException{ DangerousAccount target = new DangerousAccount(); Thread t1 = new Thread(target); Thread t2 = new Thread(target); t1.setName("Ravi"); t2.setName("Prakash"); t1.start(); /* #1 t1.join(); */ t2.start(); } public void run(){ for(int i=0; i<5; i++){ makeWithdrawl(10); if(acct.getBalance() < 0) System.out.println("Account Overdrawn"); } } public void makeWithdrawl(int amt){ if(acct.getBalance() >= amt){ System.out.println(Thread.currentThread().getName() + " is going to withdraw"); try{ Thread.sleep(500); }catch(InterruptedException e){ e.printStackTrace(); } acct.withdraw(amt); System.out.println(Thread.currentThread().getName() + " has finished the withdrawl"); }else{ System.out.println("Not Enough Money For " + Thread.currentThread().getName() + " to withdraw"); } } } I tried adding synchronized keyword in makeWithdrawl method public synchronized void makeWithdrawl(int amt){ and I keep getting this output as many times I try Ravi is going to withdraw Ravi has finished the withdrawl Ravi is going to withdraw Ravi has finished the withdrawl Ravi is going to withdraw Ravi has finished the withdrawl Ravi is going to withdraw Ravi has finished the withdrawl Ravi is going to withdraw Ravi has finished the withdrawl Not Enough Money For Prakash to withdraw Not Enough Money For Prakash to withdraw Not Enough Money For Prakash to withdraw Not Enough Money For Prakash to withdraw Not Enough Money For Prakash to withdraw This shows that only Thread t1 is working... If I un-comment the the line saying t1.join(); I get the same output. So how does synchronized differ from join() ? If I don't use synchronize keyword or join() I get various outputs like Ravi is going to withdraw Prakash is going to withdraw Prakash has finished the withdrawl Ravi has finished the withdrawl Prakash is going to withdraw Ravi is going to withdraw Prakash has finished the withdrawl Ravi has finished the withdrawl Prakash is going to withdraw Ravi is going to withdraw Prakash has finished the withdrawl Ravi has finished the withdrawl Account Overdrawn Account Overdrawn Not Enough Money For Ravi to withdraw Account Overdrawn Not Enough Money For Prakash to withdraw Account Overdrawn Not Enough Money For Ravi to withdraw Account Overdrawn Not Enough Money For Prakash to withdraw Account Overdrawn So how does the output from synchronized differ from join() ?
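
    A hedged way to frame the difference: synchronized gives mutual exclusion but promises nothing about ordering, and because the posted makeWithdrawl sleeps for half a second while holding the lock, whichever thread wins the monitor first tends to drain the account before the other ever gets a turn, so the output happens to look just like t1.join(), which really does force t1 to finish before t2 even starts. The sketch below (all names invented) keeps only the check-and-debit inside the lock so the two threads interleave while the balance still never goes negative:

      public class NarrowLockDemo {

          static class Account {
              private int balance = 50;

              // Only the check-and-debit is guarded, so the lock is held for microseconds,
              // not for the whole half-second "transaction".
              synchronized boolean tryWithdraw(int amt) {
                  if (balance < amt) return false;
                  balance -= amt;
                  return true;
              }

              synchronized int getBalance() { return balance; }
          }

          public static void main(String[] args) throws InterruptedException {
              Account acct = new Account();

              Runnable job = () -> {
                  String name = Thread.currentThread().getName();
                  for (int i = 0; i < 5; i++) {
                      // The slow part (user interaction, remote call) stays outside the lock.
                      try { Thread.sleep(500); } catch (InterruptedException e) { return; }

                      if (acct.tryWithdraw(10)) {
                          System.out.println(name + " has finished the withdrawal");
                      } else {
                          System.out.println("Not enough money for " + name + " to withdraw");
                      }
                  }
              };

              Thread t1 = new Thread(job, "Ravi");
              Thread t2 = new Thread(job, "Prakash");
              t1.start();
              t2.start();    // no join between the starts: both threads run concurrently
              t1.join();
              t2.join();     // the main thread simply waits for both to finish
          }
      }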

    Read the article

  • WP7 silverlight custom control using popup

    - by Miloud B.
    Hi guys, I'm creating a custom datepicker, I have a textbox, once clicked it opens a calendar within a popup. What I want to do is change the size of the popup so it shows my whole calendar, but I can't manage to change it..., I've tried using Height, Width, MinHeight, MinWidth... but it doesn't work, the popup keep showing with a fixed size. The thing is that my popup's parent property isn't evaluated since it has expression issues (according to debugger), so I'm sure my popup's parent isn't the main screen( say layout grid). How can I for example make my popup open within a specific context ? This part of my code isn't XAML, it's C# code only and it looks like: using System; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Ink; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes; using System.Windows.Controls.Primitives; namespace CalendarBranch.components { public class wpDatePicker:TextBox { private CalendarPopup calendar; private Popup popup; public wpDatePicker() { this.calendar = new CalendarPopup(); this.popup = new Popup(); this.popup.Child = this.calendar; this.popup.Margin = new Thickness(0); this.MouseLeftButtonUp += new MouseButtonEventHandler(wpDatePicker_MouseLeftButtonUp); this.calendar.onDateSelect += new EventHandler(onDateSelected); this.IsReadOnly = true; } protected void wpDatePicker_MouseLeftButtonUp(object sender, MouseButtonEventArgs e) { this.popup.Height = this.calendar.Height; this.popup.Width = this.calendar.Width; this.popup.HorizontalAlignment = HorizontalAlignment.Center; this.popup.VerticalAlignment = VerticalAlignment.Center; this.popup.HorizontalOffset = 0; this.popup.VerticalOffset = 0; this.popup.MinHeight = this.calendar.Height; this.popup.MinWidth = this.calendar.Width; this.popup.IsOpen = true; } private void onDateSelected(Object sender, EventArgs ea) { this.Text = this.calendar.SelectedValue.ToShortDateString(); this.popup.IsOpen = false; } } } PS: the class Calendar is simply a UserControl that contains a grid with multiple columns, HyperLinkButtons and TextBlocks, so nothing special. Thank you in advance guys ;) Cheers Miloud B.

    Read the article

  • java NullPointerException when parsing XML

    - by behrk2
    Hi Everyone, I keep receiving a java.lang.NullPointerException while trying to parse out the values of ths tags in the following XML sample: <?xml version="1.0" standalone="yes"?> <autocomplete> <autocomplete_item> <title short="Forrest Gump"></title> </autocomplete_item> <autocomplete_item> <title short="Forrest Landis"></title> </autocomplete_item> <autocomplete_item> <title short="Finding Forrester"></title> </autocomplete_item> <autocomplete_item> <title short="Menotti: The Medium: Maureen Forrester"></title> </autocomplete_item> </autocomplete> Here is my parsing code, can anyone see where I am going wrong? Thanks! public String parse(String element) { Document doc = null; String result = null; DocumentBuilderFactory docBuilderFactory = DocumentBuilderFactory .newInstance(); DocumentBuilder docBuilder = null; try { docBuilder = docBuilderFactory.newDocumentBuilder(); } catch (ParserConfigurationException e1) { // TODO Auto-generated catch block e1.printStackTrace(); } docBuilder.isValidating(); try { doc = docBuilder.parse(input); } catch (SAXException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } doc.getDocumentElement().normalize(); NodeList list = doc.getElementsByTagName(element); _node = new String(); _element = new String(); for (int i = 0; i < list.getLength(); i++) { Node value = list.item(i).getChildNodes().item(0); _node = list.item(i).getNodeName(); _element = value.getNodeValue(); result = _element; SearchResults searchResults = new SearchResults(); searchResults.setTitles(result); Vector test = searchResults.getTitles(); for (int p = 0; p < test.size(); p++) { System.out.println("STUFF: " + test.elementAt(p)); } }// end for return result; }
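
    The likely culprit: every <title short="..."></title> element in that feed is empty, so getChildNodes().item(0) returns null and the later getNodeValue() call throws. The movie names live in the short attribute, which can be read straight off the element. A trimmed-down sketch against a shortened copy of the feed:

      import java.io.StringReader;
      import javax.xml.parsers.DocumentBuilder;
      import javax.xml.parsers.DocumentBuilderFactory;
      import org.w3c.dom.Document;
      import org.w3c.dom.Element;
      import org.w3c.dom.NodeList;
      import org.xml.sax.InputSource;

      public class TitleAttributeParser {
          public static void main(String[] args) throws Exception {
              String xml = "<?xml version=\"1.0\" standalone=\"yes\"?>"
                      + "<autocomplete>"
                      + "<autocomplete_item><title short=\"Forrest Gump\"></title></autocomplete_item>"
                      + "<autocomplete_item><title short=\"Finding Forrester\"></title></autocomplete_item>"
                      + "</autocomplete>";

              DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
              Document doc = builder.parse(new InputSource(new StringReader(xml)));
              doc.getDocumentElement().normalize();

              NodeList titles = doc.getElementsByTagName("title");
              for (int i = 0; i < titles.getLength(); i++) {
                  // The <title> element has no text child to ask for; its value is an attribute.
                  Element title = (Element) titles.item(i);
                  System.out.println(title.getAttribute("short"));
              }
          }
      }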

    Read the article

  • zend_navigation and modules

    - by Grant Collins
    Hi, I am developing an application at the moment with zend and I have seperated the app into modules. The default module is the main site where unlogged in users access and have free reign to look around. When you log in, depending on the user type you either go to module A or module B, which is controlled by simple ACLs. If you have access to Module A you can not access Module B and visa versa. Both user types can see the default module. Now I want to use Zend_Navigation to manage the entire applications navigation in all modules. I am not sure how to go about this, as all the examples that I have seen work within a module or very simple application. I've tried to have my navigation.xml file look like this: <configdata> <navigation> <label>Home</label> <controller>index</controller> <action>index</action> <module>default</module> <pages> <tour> <label>tour</label> <controller>tour</controller> <action>index</action> <module>default</module> </tour> <blog> <label>blog</label> <url>http://blog.mysite.com</url> </blog> <support> <label>Support</label> <controller>support</controller> <action>index</action> <module>default</module> </support> </pages> </navigation> </configdata> This if fine for the default module, but how would I go about the other modules to this navigation page? Each module has it's own home page, and others etc. Would I be better off adding a unique navigation.xml file for each module that is loaded in the preDispatch plugin that I have written to handle my ACLs?? Or keep them in one massive navigation file? Any tips would be fantastic. Thanks, Grant

    Read the article

  • Oracle Schema Design: Separate Schema with I/O Overhead?

    - by Guru
    We are designing database schema for a new system based on Oracle 11gR1. We have identified a main schema which would have close to 100 tables, these will be accessed from the front end Java application. We have a requirement to audit the values which got changed in close to 50 tables, this has to be done every row. Which means, it is possible that, for a single row in MYSYS.T1 there might be 50 (or more) rows in MYSYS_AUDIT.T1_AUD table. We might be having old values of every column entry and new values available from T1. DBA gave an observation, advising against this method, because he said, separate schema meant an extra I/O for every operation. Basically AUDIT schema would be used only to do some analyse and enter values (thus SELECT and INSERT). Is it true that, "a separate schema means an extra I/O" ? I could not find justification. It appears logical to me, as the AUDIT data should not be tampered with, so a separate schema. Also, we designed a separate schema for archiving some tables from MYSYS. From MYSYS_ARC the table might be backed up into tapes or deleted after sufficient time. Few stats: Few tables (close to 20, 30) in MYSYS schema could grow to around 50M rows. We have asked for a total disk space of 4 TB. MYSYS_AUDIT schema might be having 10 times that of MYSYS but we wont keep them more than 3 months. Questions Given all these, can you suggest me any improvements? Separate schema affects disc I/O? (one extra I/O for every schema ?) Any general suggestions? Figure: +-------------------+ +-------------------+ | MYSYS | | MYSYS_AUDIT | | | | | | 1. T1 | | 1. T1_AUD | | 2. T2 | | 2. T2_AUD | | 3. T3 |--------->| 3. T3_AUD | | 4. T4 |(SELECT, | 4. T4_AUD | | . | INSERT) | . | | . | | . | | . | | . | | 100. T100 | | 50. T50_AUD | +-------------------+ +-------------------+ | | | | |(INSERT) | | | * +-------------------+ | MYSYS_ARC | | | | 1. T1_ARC | | 2. T2_ARC | | 3. T3_ARC | | 4. T4_ARC | | . | | . | | . | | 100. T100_ARC | +-------------------+ Apart from this, we have two more schemas with only read only rights, but mainly they are for adhoc purpose and we dont mind the performance on them.

    Read the article

  • Recursive XSLT, part 2

    - by Eric
    Ok, following on from my question here. Lets say my pages are now like this: A.xml: <page> <header>Page A</header> <content-a>Random content for page A</content-a> <content-b>More of page A's content</content-b> <content-c>More of page A's content</content-c> <!-- This doesn't keep going: there are a predefined number of sections --> </page> B.xml: <page include="A.xml"> <header>Page B</header> <content-a>Random content for page B</content-a> <content-b>More of page B's content</content-b> <content-c>More of page B's content</content-c> </page> C.xml: <page include="B.xml"> <header>Page C</header> <content-a>Random content for page C</content-a> <content-b>More of page C's content</content-b> <content-c>More of page C's content</content-c> </page> After the transform (on C.xml), I'd like to end up with this: <h1>Page C</h1> <div> <p>Random content for page C</p> <p>Random content for page B</p> <p>Random content for page A</p> </div> <div> <p>More of page C's content</p> <p>More of page B's content</p> <p>More of page A's content</p> </div> <div> <p>Yet more of page C's content</p> <p>Yet more of page B's content</p> <p>Yet more of page A's content</p> </div> I know that I can use document(@include) to include another document. However, the recursion is a bit beyond me. How would I go about writing such a transform?

    Read the article

  • Java: Reading a pdf file from URL into Byte array/ByteBuffer in an applet.

    - by Pol
    I'm trying to figure out why this particular snippet of code isn't working for me. I've got an applet which is supposed to read a .pdf and display it with a pdf-renderer library, but for some reason when I read in the .pdf files which sit on my server, they end up as being corrupt. I've tested it by writing the files back out again. I've tried viewing the applet in both IE and Firefox and the corrupt files occur. Funny thing is, when I trying viewing the applet in Safari (for Windows), the file is actually fine! I understand the JVM might be different, but I am still lost. I've compiled in Java 1.5. JVMs are 1.6. The snippet which reads the file is below. public static ByteBuffer getAsByteArray(URL url) throws IOException { ByteArrayOutputStream tmpOut = new ByteArrayOutputStream(); URLConnection connection = url.openConnection(); int contentLength = connection.getContentLength(); InputStream in = url.openStream(); byte[] buf = new byte[512]; int len; while (true) { len = in.read(buf); if (len == -1) { break; } tmpOut.write(buf, 0, len); } tmpOut.close(); ByteBuffer bb = ByteBuffer.wrap(tmpOut.toByteArray(), 0, tmpOut.size()); //Lines below used to test if file is corrupt //FileOutputStream fos = new FileOutputStream("C:\\abc.pdf"); //fos.write(tmpOut.toByteArray()); return bb; } I must be missing something, and I've been banging my head trying to figure it out. Any help is greatly appreciated. Thanks. Edit: To further clarify my situation, the difference in the file before I read then with the snippet and after, is that the ones I output after reading are significantly smaller than they originally are. When opening them, they are not recognized as .pdf files. There are no exceptions being thrown that I ignore, and I have tried flushing to no avail. This snippet works in Safari, meaning the files are read in it's entirety, with no difference in size, and can be opened with any .pdf reader. In IE and Firefox, the files always end up being corrupted, consistently the same smaller size. I monitored the len variable (when reading a 59kb file), hoping to see how many bytes get read in at each loop. In IE and Firefox, at 18kb, the in.read(buf) returns a -1 as if the file has ended. Safari does not do this. I'll keep at it, and I appreciate all the suggestions so far.
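
    Not a guaranteed cure, but a reworked version of the snippet that removes two avoidable problems: the original asks one URLConnection for the content length and then opens a second, unrelated connection via url.openStream(), and nothing ever checks that the advertised Content-Length actually arrived. Reading from the same connection with caching switched off, and failing fast on a short read, at least makes the IE/Firefox truncation visible instead of handing a silently corrupt PDF to the renderer:

      import java.io.ByteArrayOutputStream;
      import java.io.IOException;
      import java.io.InputStream;
      import java.net.URL;
      import java.net.URLConnection;
      import java.nio.ByteBuffer;

      public class UrlBytes {

          // Reads the resource fully from ONE connection, with the plugin/browser cache
          // disabled, and complains loudly if fewer bytes arrive than advertised.
          public static ByteBuffer getAsByteArray(URL url) throws IOException {
              URLConnection connection = url.openConnection();
              connection.setUseCaches(false);               // don't trust a cached partial copy
              int contentLength = connection.getContentLength();

              ByteArrayOutputStream tmpOut =
                      new ByteArrayOutputStream(contentLength > 0 ? contentLength : 16 * 1024);
              InputStream in = connection.getInputStream(); // same connection we asked the length of
              try {
                  byte[] buf = new byte[8192];
                  int len;
                  while ((len = in.read(buf)) != -1) {
                      tmpOut.write(buf, 0, len);
                  }
              } finally {
                  in.close();
              }

              if (contentLength > 0 && tmpOut.size() != contentLength) {
                  throw new IOException("Expected " + contentLength
                          + " bytes but only read " + tmpOut.size());
              }
              return ByteBuffer.wrap(tmpOut.toByteArray());
          }

          public static void main(String[] args) throws IOException {
              System.out.println(getAsByteArray(new URL(args[0])).remaining() + " bytes read");
          }
      }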

    Read the article

  • Permutations of Varying Size

    - by waiwai933
    I'm trying to write a function in PHP that gets all permutations of all possible sizes. I think an example would be the best way to start off: $my_array = array(1,1,2,3); Possible permutations of varying size: 1 1 // * See Note 2 3 1,1 1,2 1,3 // And so forth, for all the sets of size 2 1,1,2 1,1,3 1,2,1 // And so forth, for all the sets of size 3 1,1,2,3 1,1,3,2 // And so forth, for all the sets of size 4 Note: I don't care if there's a duplicate or not. For the purposes of this example, all future duplicates have been omitted. What I have so far in PHP: function getPermutations($my_array){ $permutation_length = 1; $keep_going = true; while($keep_going){ while($there_are_still_permutations_with_this_length){ // Generate the next permutation and return it into an array // Of course, the actual important part of the code is what I'm having trouble with. } $permutation_length++; if($permutation_length>count($my_array)){ $keep_going = false; } else{ $keep_going = true; } } return $return_array; } The closest thing I can think of is shuffling the array, picking the first n elements, seeing if it's already in the results array, and if it's not, add it in, and then stop when there are mathematically no more possible permutations for that length. But it's ugly and resource-inefficient. Any pseudocode algorithms would be greatly appreciated. Also, for super-duper (worthless) bonus points, is there a way to get just 1 permutation with the function but make it so that it doesn't have to recalculate all previous permutations to get the next? For example, I pass it a parameter 3, which means it's already done 3 permutations, and it just generates number 4 without redoing the previous 3? (Passing it the parameter is not necessary, it could keep track in a global or static). The reason I ask this is because as the array grows, so does the number of possible combinations. Suffice it to say that one small data set with only a dozen elements grows quickly into the trillions of possible combinations and I don't want to task PHP with holding trillions of permutations in its memory at once.
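
    The question is about PHP, but the usual backtracking approach is language-neutral; here is a Java sketch (Java only to match the other examples on this page) that emits every ordered selection of sizes 1 through n, with duplicate values left in exactly as the post allows:

      import java.util.ArrayList;
      import java.util.List;

      public class VaryingSizePermutations {

          // Prints every ordered arrangement of exactly targetSize of the input elements.
          // Positions are tracked rather than values, so duplicate values simply reappear.
          static void permute(int[] items, boolean[] used, List<Integer> current, int targetSize) {
              if (current.size() == targetSize) {
                  System.out.println(current);
                  return;
              }
              for (int i = 0; i < items.length; i++) {
                  if (used[i]) continue;   // each position may be used once per arrangement
                  used[i] = true;
                  current.add(items[i]);
                  permute(items, used, current, targetSize);
                  current.remove(current.size() - 1);
                  used[i] = false;
              }
          }

          public static void main(String[] args) {
              int[] items = {1, 1, 2, 3};
              for (int size = 1; size <= items.length; size++) {
                  permute(items, new boolean[items.length], new ArrayList<>(), size);
              }
          }
      }

    For the bonus about producing just permutation number k, the usual trick for fixed-length permutations is to index them directly with the factorial number system instead of replaying the earlier ones; for varying sizes you would first subtract the counts of all shorter lengths.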

    Read the article

  • Help me stabilize this jRun configuration (CF9/Win2k3/IIS6)

    - by jfrobishow
    Not sure if this would be better suited for ServerFault, but since I am not an admin but a developer I figured I would try SO. We've been struggling to keep our multi-server configuration stable for quite some time now. At the end of last month we were running under CF 7.0.2 on a two servers setup (one instance each). At that point we managed to get our uptime to around 1 week per instance before they would restart by themselves. Since the beginning of the month we upgraded to CF 9 and we're back to square one with multi-restart a day. Our current configuration is 2 Win2k3 servers, running a cluster of 4 instances, 2 instances per server. At this point we are pretty certain this is due to improper JVM settings. We've been toying with them and while some are more stable than others we never quite got it right. From the default: java.args=-server -Xmx512m -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/ To currently: java.args=-server -Xmx896m -Dsun.io.useCanonCaches=false -XX:MaxPermSize=512m -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=90 -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/ -verbose:gc -Xloggc:c:/Jrun4/logs/gc/gcInstance1b.log We have determined that we do need more than the default 512MB simply by monitoring with FusionReactor, on average our amount of memory consumed is hovering in the mid 300MB and can go up to low 700MB under heavy load. Most of the crash will be logged in jrun4/bin/hs_err_pid*.log always an "Out of swap space" I've attached links to the hs_err and garbage collector log file from yesterday at the bottom of the post. The relevant part is (I think) this: Heap PSYoungGen total 89856K, used 19025K [0x55490000, 0x5b6f0000, 0x5b810000) eden space 79232K, 16% used [0x55490000,0x561a64c0,0x5a1f0000) from space 10624K, 52% used [0x5ac90000,0x5b20e2f8,0x5b6f0000) to space 10752K, 0% used [0x5a1f0000,0x5a1f0000,0x5ac70000) PSOldGen total 460416K, used 308422K [0x23810000, 0x3f9b0000, 0x55490000) object space 460416K, 66% used [0x23810000,0x36541bb8,0x3f9b0000) PSPermGen total 107520K, used 106079K [0x03810000, 0x0a110000, 0x23810000) object space 107520K, 98% used [0x03810000,0x09fa7e40,0x0a110000) From it, I gather that its the PSPermGen that is full (most logs will show the same before a crash), which is why we increased MaxPermSize but the total still show as 107520K!??! No one here is a jRun expert, so any help or even ideas on what to try next would be greatly appreciated!! The log files: Sorry I know sendspace isn't the friendliest of places - if you have other host suggestion for log files let me know and I'll update the post (SO doesn't like them inline, it blows up the format of the post). The hs_err log file: http://www.sendspace.com/file/fgak8l The gc log: http://www.sendspace.com/file/w0r2ct
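
    Two hedged readings of the logs quoted above: the bracketed address range on the PSPermGen line (0x03810000 through 0x23810000) spans 512 MB, so -XX:MaxPermSize=512m is being picked up and the 107520K figure is only the currently committed size, not the cap; and "Out of swap space" in an hs_err log normally means a native allocation failed, i.e. the 32-bit JRun process ran out of virtual address space, so raising -Xmx and MaxPermSize tends to make these crashes more likely rather than less. A purely illustrative, more conservative argument line (the sizes are placeholders to experiment with, not recommendations):

      java.args=-server -Xmx640m -XX:MaxPermSize=256m -XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError -Dsun.io.useCanonCaches=false -verbose:gc -Xloggc:c:/Jrun4/logs/gc/gcInstance1b.log -Dcoldfusion.rootDir={application.home}/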

    Read the article

  • Any simple approaches for managing customer data change requests for global reference files?

    - by Kelly Duke
    For the first time, I am developing in an environment in which there is a central repository for a number of different industry standard reference data tables and many different customers who need to select records from these industry standard reference data tables to fill in foreign key information for their customer specific records. Because these industry standard reference files are utilized by all customers, I want to reserve Create/Update/Delete access to these records for global product administrators. However, I would like to implement a (semi-)automated interface by which specific customers could request record additions, deletions or modifications to any of the industry standard reference files that are shared among all customers. I know I need something like a "data change request" table specifying: user id, user request datetime, request type (insert, modify, delete), a user entered text explanation of the change request, the user request's current status (pending, declined, completed), admin resolution datetime, admin id, an admin entered text description of the resolution, etc. What I can't figure out is how to elegantly handle the fact that these data change requests could apply to dozens of different tables with differing table column definitions. I would like to give the customer users making these data change requests a convenient way to enter their proposed record additions/modifications directly into CRUD screens that look very much like the reference table CRUD screens they don't have write/delete permissions for (with an additional text explanation and perhaps request priority field). I would also like to give the global admins a tool that allows them to view all the outstanding data change requests for the users they oversee sorted by date requested or user/date requested. Upon selecting a data change request record off the list, the admin would be directed to another CRUD screen that would be populated with the fields the customer users requested for the new/modified industry standard reference table record along with customer's text explanation, the request status and the text resolution explanation field. At this point the admin could accept/edit/reject the requested change and if accepted the affected industry standard reference file would be automatically updated with the appropriate fields and the data change request record's status, text resolution explanation and resolution datetime would all also be appropriately updated. However, I want to keep the actual production reference tables as simple as possible and free from these extraneous and typically null customer change request fields. I'd also like the data change request file to aggregate all data change requests across all the reference tables yet somehow "point to" the specific reference table and primary key in question for modification & deletion requests or the specific reference table and associated customer user entered field values in question for record creation requests. Does anybody have any ideas of how to design something like this effectively? Is there a cleaner, simpler way I am missing? Thank you so much for reading.
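
    One common way to keep the production reference tables clean while still aggregating requests against dozens of differently-shaped tables is to store the proposed column values generically (name/value pairs or a serialized document) next to a pointer to the target table and key, and only copy them into the real table when an administrator approves the request. A bare-bones Java sketch of such a record, with every field name invented:

      import java.time.Instant;
      import java.util.Map;

      public class DataChangeRequest {

          public enum RequestType { INSERT, MODIFY, DELETE }
          public enum Status { PENDING, DECLINED, COMPLETED }

          // Which reference table and row the request is aimed at; the key is null
          // for INSERT requests, where only the proposed values matter.
          private String targetTable;           // e.g. "industry_code" (illustrative name)
          private String targetPrimaryKey;

          // Proposed column values captured generically, so one request table can serve
          // reference tables with completely different column definitions.
          private Map<String, String> proposedValues;

          private RequestType requestType;
          private String requesterId;
          private Instant requestedAt;
          private String requesterExplanation;

          private Status status = Status.PENDING;
          private String adminId;
          private Instant resolvedAt;
          private String resolutionExplanation;

          // getters and setters omitted for brevity
      }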

    Read the article

  • GUI Application becomes unresponsive while an HTTP request is being made

    - by JW
    I have just made my first proper little desktop GUI application that basically wraps a GUI (php-gtk) interface around a SimpleTest Web Test case to make it act as a remote testing client. Each time the local The Web Test case runs, it sends an HTTP request to another SimpleTest case (that has an XHTML interface) sitting on my server. The application, allows me to run one local test that collates information from multiple remote tests. It just has a 'Start Test' button, 'Stop Test' button and a setting to increase/decrease the number of remote tests conducted in each HTTP Request. Each test-run takes about an hour to complete. The trouble is, most of the time the application is making http requests. Furthermore, whenever an HTTP Request is being made, the application's GUI is unresponsive. I have taken to making the application wait a few seconds (iterating through the Gtk::main_iteration ) between requests in order to give the user time to re-size the window, press the Stop button, etc. But, this makes the whole test run take a lot a longer than is necessary. <?php require_once('simpletest/web_tester.php'); class TestRemoteTestingClient extends WebTestCase { function testRunIterations() { ... $this->assertTrue($this->get($nextUrl), 'getting from pointer:'. $this->_remoteMementoPointer); $this->assertResponse(200, "checking response for " . $nextUrl ); $this->assertText('RemoteNodeGreen'); $this->doGtkIterationsForMinNSeconds($secs); ... } public function doGtkIterationsForMinNSeconds($secs) { $this->appendStatusMessage("Waiting " . $secs); $start = time(); $end = $start + $secs; while( (time() < $end) ) { $this->appendStatusMessage("Waiting " . ($end - time())); while(gtk::events_pending()) Gtk::main_iteration(); } } } Is there a way to keep the application responsive whilst, at the same time making an HTTP request? I am considering splitting the application into two, where: Test Controller Application - Acts as a settings-writer / report-reader and this writes to settings file and reads a report file. Test Runner Application - Acts as a settings-reader / report-writer and, for each iteration reads the settings file, Runs the test, then write a report. So to tell it to close down - I'd: Press the Stop Button on the 'Test Controller Application', which writes to the settings file, which is read by the 'Test Runner Application' which stops, then writes to the report file to say it stopped the 'Test Controller Application' reads the report and updates the status and so on... However, before I go ahead and split the application in two - I am wondering if there is any other obvious way to deal with, this issue. I suspect it is probably quite common and a well-trodden path. Also is there an easier way to send messages between two applications?
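
    PHP-GTK has no real threading, so the two-process split described above is a perfectly reasonable route; the sketch below only illustrates the underlying pattern (blocking I/O on a worker, the event loop merely polling for results so it never freezes), written in Java to match the other examples on this page, with every name invented:

      import java.util.concurrent.BlockingQueue;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.LinkedBlockingQueue;
      import java.util.concurrent.TimeUnit;

      public class ResponsiveLoopSketch {

          public static void main(String[] args) throws InterruptedException {
              BlockingQueue<String> results = new LinkedBlockingQueue<>();
              ExecutorService worker = Executors.newSingleThreadExecutor();

              // The slow "HTTP request" runs on the worker, never on the UI thread.
              worker.submit(() -> {
                  try {
                      Thread.sleep(3000);                   // stand-in for the remote test call
                      results.put("remote test finished");
                  } catch (InterruptedException e) {
                      Thread.currentThread().interrupt();
                  }
              });

              // Stand-in for the GTK main loop: it keeps iterating (so clicks and resizes
              // keep working) and only polls for results instead of blocking on the request.
              boolean done = false;
              while (!done) {
                  // Gtk::main_iteration() would go here in the real application.
                  String result = results.poll(50, TimeUnit.MILLISECONDS);
                  if (result != null) {
                      System.out.println("UI update: " + result);
                      done = true;
                  } else {
                      System.out.println("UI still responsive...");
                  }
              }
              worker.shutdown();
          }
      }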

    Read the article

  • design for a wrapper around command-line utilities

    - by hatchetman82
    im trying to come up with a design for a wrapper for use when invoking command line utilities in java. the trouble with runtime.exec() is that you need to keep reading from the process' out and err streams or it hangs when it fills its buffers. this has led me to the following design: public class CommandLineInterface { private final Thread stdOutThread; private final Thread stdErrThread; private final OutputStreamWriter stdin; private final History history; public CommandLineInterface(String command) throws IOException { this.history = new History(); this.history.addEntry(new HistoryEntry(EntryTypeEnum.INPUT, command)); Process process = Runtime.getRuntime().exec(command); stdin = new OutputStreamWriter(process.getOutputStream()); stdOutThread = new Thread(new Leech(process.getInputStream(), history, EntryTypeEnum.OUTPUT)); stdOutThread.setDaemon(true); stdOutThread.start(); stdErrThread = new Thread(new Leech(process.getErrorStream(), history, EntryTypeEnum.ERROR)); stdErrThread.setDaemon(true); stdErrThread.start(); } public void write(String input) throws IOException { this.history.addEntry(new HistoryEntry(EntryTypeEnum.INPUT, input)); stdin.write(input); stdin.write("\n"); stdin.flush(); } } public class Leech implements Runnable{ private final InputStream stream; private final History history; private final EntryTypeEnum type; private volatile boolean alive = true; public Leech(InputStream stream, History history, EntryTypeEnum type) { this.stream = stream; this.history = history; this.type = type; } public void run() { BufferedReader reader = new BufferedReader(new InputStreamReader(stream)); String line; try { while(alive) { line = reader.readLine(); if (line==null) break; history.addEntry(new HistoryEntry(type, line)); } } catch (Exception e) { e.printStackTrace(); } } } my issue is with the Leech class (used to "leech" the process' out and err streams and feed them into history - which acts like a log file) - on the one hand reading whole lines is nice and easy (and what im currently doing), but it means i miss the last line (usually the prompt line). i only see the prompt line when executing the next command (because there's no line break until that point). on the other hand, if i read characters myself, how can i tell when the process is "done" ? (either complete or waiting for input) has anyone tried something like waiting 100 millis since the last output from the process and declaring it "done" ? any better ideas on how i can implement a nice wrapper around things like runtime.exec("cmd.exe") ?
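
    One way around "readLine() never shows me the prompt" is to read characters instead of lines and to treat a short quiet period as end-of-output. The sketch below only illustrates that idea; the 200 ms idle threshold is an assumption, handleLine stands in for the History/addEntry machinery from the original design, and the owner is expected to call stop() once Process.waitFor() returns. Polling ready() like this is crude, but it avoids blocking forever on a prompt that has no trailing newline:

      import java.io.BufferedReader;
      import java.io.IOException;
      import java.io.InputStream;
      import java.io.InputStreamReader;

      public class PromptAwareLeech implements Runnable {

          private final BufferedReader reader;
          private volatile boolean alive = true;
          private static final long IDLE_MILLIS = 200;   // assumed "the process went quiet" threshold

          public PromptAwareLeech(InputStream stream) {
              this.reader = new BufferedReader(new InputStreamReader(stream));
          }

          public void stop() { alive = false; }          // call after Process.waitFor() returns

          @Override
          public void run() {
              StringBuilder partial = new StringBuilder();
              long lastByteAt = System.currentTimeMillis();
              try {
                  while (alive) {
                      if (reader.ready()) {
                          int c = reader.read();
                          if (c == -1) break;            // the process closed its stream
                          lastByteAt = System.currentTimeMillis();
                          if (c == '\n') {
                              handleLine(partial.toString(), false);
                              partial.setLength(0);
                          } else if (c != '\r') {
                              partial.append((char) c);
                          }
                      } else {
                          // Nothing new: if output has been idle for a while and something is
                          // buffered, surface it as a partial (probably prompt) line.
                          if (partial.length() > 0
                                  && System.currentTimeMillis() - lastByteAt > IDLE_MILLIS) {
                              handleLine(partial.toString(), true);
                              partial.setLength(0);
                          }
                          Thread.sleep(20);
                      }
                  }
                  if (partial.length() > 0) handleLine(partial.toString(), true);
              } catch (IOException | InterruptedException e) {
                  e.printStackTrace();
              }
          }

          // Stand-in for history.addEntry(...) in the original design.
          private void handleLine(String line, boolean isPartial) {
              System.out.println((isPartial ? "[no newline yet] " : "") + line);
          }
      }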

    Read the article
