Search Results

Search found 8219 results on 329 pages for 'more or less'.

  • JEE6 Global JNDI Name and Maven Deployment

    - by wobblycogs
    I'm having some problems with the global JNDI names of my EJB resources, which is going to (or at least will eventually) cause my JNDI lookups to fail. The project is developed in NetBeans and is a standard Maven web application. When my application is deployed to GlassFish 3.0, the application name is set to something like com.example_myapp_war_1.0-SNAPSHOT, which is all well and good from NetBeans' point of view because it ensures the name is unique, but it also means all the EJBs get global names such as:

        java:global/com.example_myapp_war_1.0-SNAPSHOT/CustomerService

    This, of course, is going to cause problems, because every time the version changes, all the global names change (I've tested this by changing the version, and the names did indeed change). The name is generated from the POM file and is a concatenation of:

        <groupId>com.example</groupId>
        <artifactId>myapp</artifactId>
        <packaging>war</packaging>
        <version>1.0-SNAPSHOT</version>

    Up until now I've got away with just injecting all the resources using @EJB, but now I need to access the CustomerService EJB from a JSF converter, so I'm doing a JNDI lookup like this:

        try {
            Context ctx = new InitialContext();
            CustomerService customerService = (CustomerService) ctx.lookup(
                "java:global/com.example_myapp_war_1.0-SNAPSHOT/CustomerService");
            return customerService.get(submittedValue);
        } catch (Exception e) {
            logger.error("Failed to convert customer.", e);
            return null;
        }

    This will clearly break when the application is properly released and the module name changes. So, the million-dollar question: how can I set the module name in Maven, or how do I recover the module name so that I can programmatically build the JNDI name at runtime? I've tried setting it in web.xml as suggested by that link, but it was ignored. I'd rather build the name at runtime, as that leaves less scope for screw-ups when the application is deployed. Many thanks for any help, I've been tearing my hair out all day on this.
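
    One way to avoid hard-coding the versioned name is to ask the container for it: Java EE 6 defines the standard JNDI entries java:module/ModuleName and java:app/AppName, so the global name can be assembled at runtime. A minimal sketch for a standalone WAR deployment (the helper class is hypothetical and error handling is elided):

        import javax.naming.Context;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        public final class JndiNames {

            private JndiNames() {}

            /** Builds java:global/<module>/<bean> for the current deployment. */
            public static String globalName(String beanName) throws NamingException {
                Context ctx = new InitialContext();
                String moduleName = (String) ctx.lookup("java:module/ModuleName");
                return "java:global/" + moduleName + "/" + beanName;
            }
        }

    With that, the converter can call ctx.lookup(JndiNames.globalName("CustomerService")) and survive version bumps; for a lookup made from inside the same WAR, the shorter java:module/CustomerService form usually works directly.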

  • Emacs recursive project search

    - by hekevintran
    I am switching to Emacs from TextMate. One feature of TextMate that I would really like to have in Emacs is the "Find in Project" search box that uses fuzzy matching. Emacs sort of has this with ido, but ido does not search recursively through child directories; it searches only within one directory. Is there a way to give ido a root directory and have it search everything under it?

    Update: The questions below pertain to find-file-in-project.el from Michal Marczyk's answer. If anything in this message sounds obvious, it's because I have used Emacs for less than one week. :-)

    As I understand it, project-local-variables lets me define things in a .emacs-project file that I keep in my project root. How do I point find-file-in-project to my project root?

    I am not familiar with regex syntax in Emacs Lisp. The default value for ffip-regexp is:

        ".*\\.\\(rb\\|js\\|css\\|yml\\|yaml\\|rhtml\\|erb\\|html\\|el\\)"

    I presume that I can just switch the extensions to the ones appropriate for my project.

    Could you explain ffip-find-options? From the file:

        (defvar ffip-find-options ""
          "Extra options to pass to `find' when using find-file-in-project.
        Use this to exclude portions of your project: \"-not -regex \\".vendor.\\"\"")

    What does this mean exactly, and how do I use it to exclude files/directories?

    Could you share an example .emacs-project file?

  • java converting int to short

    - by changed
    Hi, I am calculating a 16-bit checksum on my data, which I need to send to a server where it has to be recalculated and matched against the provided checksum. The checksum value I compute is an int, but I only have 2 bytes for sending the value, so I cast the int to short when calling my shortToBytes method. This works fine while the checksum value is less than 32767; after that I get negative values.

    The thing is, Java does not have unsigned primitives, so I am not able to send values greater than the maximum signed short. How can I convert the int to a short and send it over the network without worrying about truncation and signed vs. unsigned values? Both sides run Java programs.

        private byte[] shortToBytes(short sh) {
            byte[] baValue = new byte[2];
            ByteBuffer buf = ByteBuffer.wrap(baValue);
            return buf.putShort(sh).array();
        }

        private short bytesToShort(byte[] buf, int offset) {
            byte[] baValue = new byte[2];
            System.arraycopy(buf, offset, baValue, 0, 2);
            return ByteBuffer.wrap(baValue).getShort();
        }
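
    Since the wire format is just two bytes, nothing is lost by the cast; the sign only shows up when the short is reinterpreted. A common approach (sketched here with hypothetical method names, not the original code) is to keep the checksum in an int and mask with 0xFFFF on the receiving side:

        import java.nio.ByteBuffer;

        public class ChecksumCodec {

            /** Write the low 16 bits of the checksum; values 0..65535 fit. */
            public static byte[] encode(int checksum) {
                return ByteBuffer.allocate(2).putShort((short) checksum).array();
            }

            /** Read the 16 bits back as an unsigned value held in an int. */
            public static int decode(byte[] buf, int offset) {
                short raw = ByteBuffer.wrap(buf, offset, 2).getShort();
                return raw & 0xFFFF;   // drops the sign extension
            }

            public static void main(String[] args) {
                int checksum = 40000;                       // > Short.MAX_VALUE
                int roundTripped = decode(encode(checksum), 0);
                System.out.println(roundTripped);           // prints 40000
            }
        }

    The main method round-trips 40000, a value above Short.MAX_VALUE, without any loss.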

  • Perl unpack in list context

    - by drewk
    A common 'Perlism' is generating a list as something to loop over, in this form:

        for($str=~/./g) { print "the next character from \"$str\"=$_\n"; }

    In this case the global match regex returns a list that is one character at a time from the string $str, and assigns that value to $_. Instead of a regex, split can be used in the same way, or 'a'..'z', map, etc.

    I am investigating unpack to generate a field-by-field interpretation of a string. I have always found unpack to be less straightforward to the way my brain works, and I have never really dug that deeply into it. As a simple case, I want to generate a list that is one character per element from a string using unpack (yes -- I know I can do it with split(//,$str) and /./g, but I really want to see if unpack can be used this way...).

    Obviously, I can use a field list for unpack that is unpack("A1" x length($str), $str), but is there some other way that kinda looks like globbing? That is, can I call unpack(some_format, $str) either in list context or in a loop such that unpack will return the next group of characters in the format group until $str is exhausted? I have read the Perl 5.12 pack pod, the Perl 5.12 pack tutorial and the PerlMonks tutorial.

    Here is the sample code:

        #!/usr/bin/perl
        use warnings;
        use strict;

        my $str=join('',('a'..'z', 'A'..'Z'));   #the alphabet...
        $str=~s/(.{1,3})/$1 /g;                  #...in groups of three
        print "str=$str\n\n";

        for ($str=~/./g) {
            print "regex: = $_\n";
        }

        for(split(//,$str)) {
            print "split: \$_=$_\n";
        }

        for(unpack("A1" x length($str), $str)) {
            print "unpack: \$_=$_\n";
        }

  • Java+DOM: How do I convert a DOM tree without namespaces to a namespace-aware DOM tree?

    - by java.is.for.desktop
    Hello, everyone! I receive a Document (DOM tree) from a certain API (not in the JDK). Sadly, this Document is not namespace-aware. As far as I know DOM, once a tree is generated, namespace-awareness can't be "added" afterwards.

    When converting this Document to a string using a Transformer, the XML is correct: elements have xmlns:... attributes and name prefixes. But from the DOM point of view, there are no namespaces and no prefixes. I need to be able to convert this Document into a new Document which is namespace-aware.

    Yes, I could do this by just converting it to a string and back to DOM with namespaces enabled. But: nodes of the original tree have user objects set. Converting to a string and back would make mapping these user objects to the new Document very complicated, if not impossible. So I really need a way to convert a non-namespace DOM to a namespace DOM. Are there any more-or-less straightforward solutions for this?

    The worst case (which I'm hoping to avoid) would be to manually iterate through the old Document tree and create a new namespace-aware Node for each old Node. Doing so, one would have to manually "parse" namespace prefixes, watch out for xmlns attributes, and maintain a mapping between prefixes and namespace URIs. Lots of things to go wrong.
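
    For reference, the fallback mentioned above (serialize, then re-parse with a namespace-aware parser) can be paired with a lockstep walk of the old and new trees to carry user data across, since both trees normally end up with the same node structure. This is only a sketch under that assumption -- differences in whitespace or entity handling between the serializer and parser would break the one-to-one walk:

        import java.io.StringReader;
        import java.io.StringWriter;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import org.w3c.dom.Document;
        import org.w3c.dom.Node;
        import org.xml.sax.InputSource;

        public class NamespaceFixer {

            /** Serialize the old tree and re-parse it with a namespace-aware parser. */
            public static Document reparse(Document oldDoc) throws Exception {
                StringWriter out = new StringWriter();
                Transformer t = TransformerFactory.newInstance().newTransformer();
                t.transform(new DOMSource(oldDoc), new StreamResult(out));

                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                dbf.setNamespaceAware(true);
                return dbf.newDocumentBuilder()
                          .parse(new InputSource(new StringReader(out.toString())));
            }

            /** Walk both trees in lockstep and carry one user-data entry across. */
            public static void copyUserData(Node oldNode, Node newNode, String key) {
                newNode.setUserData(key, oldNode.getUserData(key), null);
                Node o = oldNode.getFirstChild();
                Node n = newNode.getFirstChild();
                while (o != null && n != null) {
                    copyUserData(o, n, key);
                    o = o.getNextSibling();
                    n = n.getNextSibling();
                }
            }
        }

    Here copyUserData(oldDoc, newDoc, "myKey") would move a single user-data key; the key name is just an example.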

  • Lucene.net create+lock errors in ASP.NET

    - by acidzombie24
    I have an issue with Lucene.net: it throws a lock exception. After poking around I noticed these things:

    My code below works in an app, but when called from Application_Start I get a NoSuchDirectoryException. If I don't close the writer (as my code below doesn't), I WILL get a LockObtainFailedException with the message "Lock obtain timed out: SimpleFSLock@<FULL_PATH>" from either the app or ASP.NET. These threads hinted that spawned threads get fewer permissions than I do (but! my main thread has problems as well...), and one suggested solution is to impersonate IIS. I am using Visual Studio 2010; I am not sure how full-blown it is, but my attempt to impersonate failed.

    So my question is: how do I have Lucene create the directory, and not throw an exception if I don't close the writer for some reason (such as the power going out)?

        http://stackoverflow.com/questions/2341163/why-is-my-lucene-index-getting-locked/2499285#2499285
        http://stackoverflow.com/questions/1123517/lucene-net-and-i-o-threading-issue/1123981#1123981

        static IndexWriter writer = null;

        static void lucene_init()
        {
            bool create = false;
            string dirname = "LuceneIndex_z";
            if (System.IO.Directory.Exists(dirname) == false)
                create = true;
            var directory = FSDirectory.GetDirectory(dirname);
            var analyzer = new StandardAnalyzer();
            writer = new IndexWriter(directory, analyzer, create);
        }

  • LINQ to SQL: Too much CPU usage. What happens when there are multiple users?

    - by soldieraman
    I am using LINQ to SQL and seeing my CPU usage skyrocket (see the screenshot below). I have three questions:

    1. What can I do to reduce this CPU usage? I have done profiling and basically removed everything. Will turning every LINQ to SQL statement into a compiled query help?

    2. I also find that even with compiled queries, simple statements like ByID() can take 3 milliseconds on a server with 3.25 GB RAM at 3.17 GHz -- this will only get slower on a less powerful computer. Or will a compiled query get faster the more it is used?

    3. The CPU usage for a single user goes to 12-15% on the local server. Will this multiply with the number of users accessing the server when the application is put on a live server, i.e. 2 users at a time means 15 * 2 = 30% CPU usage? If so, is my application then limited to a maximum of 4-5 users at a time, or does LINQ to SQL / .NET share some of that CPU usage across users?

  • Lucene.net create+lock errors in ASP.NET

    - by acidzombie24
    I have an issue with Lucene.net: it throws a lock exception. After poking around I noticed these things:

    My code below works in an app, but when called from Application_Start I get a NoSuchDirectoryException. If I don't close the writer (as my code below doesn't), I WILL get a LockObtainFailedException with the message "Lock obtain timed out: SimpleFSLock@<FULL_PATH>" from either the app or ASP.NET. These threads hinted that spawned threads get fewer permissions than I do (but! my main thread has problems as well...), and one suggested solution is to impersonate IIS. I am using Visual Studio 2010; I am not sure how full-blown it is, but my attempt to impersonate failed.

    So my question is: how do I have Lucene create the directory, and not throw an exception if I don't close the writer for some reason (such as the power going out)?

        http://stackoverflow.com/questions/2341163/why-is-my-lucene-index-getting-locked/2499285#2499285
        http://stackoverflow.com/questions/1123517/lucene-net-and-i-o-threading-issue/1123981#1123981

        static IndexWriter writer = null;

        static void lucene_init()
        {
            bool create = false;
            // I now use a full path. I still get NoSuchDirectoryException
            //string dirname = "LuceneIndex_z";
            if (System.IO.Directory.Exists(dirname) == false)
                create = true;
            var directory = FSDirectory.GetDirectory(dirname);
            var analyzer = new StandardAnalyzer();
            writer = new IndexWriter(directory, analyzer, create);
        }

  • Spikes in Socket Performance

    - by Harun Prasad
    We are facing random spikes in a high-throughput transaction processing system that uses sockets for IPC. Below is the setup used for the run:

    The client opens and closes a new connection for every transaction, and there are 4 exchanges between the server and the client. We have disabled TIME_WAIT by setting the socket linger (SO_LINGER) option via setsockopt, as we thought the spikes were caused by sockets waiting in TIME_WAIT. There is no processing done for the transaction; only messages are passed. The OS is CentOS 5.4.

    The average round-trip time is around 3 milliseconds, but sometimes the round-trip time ranges from 100 milliseconds to a couple of seconds.

    Steps used for execution and measurement, and the output:

    Start the server:

        $ python sockServerLinger.py > /dev/null &

    Start the client to post 1 million transactions to the server; it logs the time for each transaction in the client.log file:

        $ python sockClient.py 1000000 client.log

    Once the execution finishes, the following command shows the execution times greater than 100 milliseconds in the format <line_number>:<execution_time>:

        $ grep -n "0.[1-9]" client.log | less

    Below is the example code for the server and client.

    Server:

        # File: sockServerLinger.py
        import socket, traceback, time
        import struct

        host = ''
        port = 9999

        l_onoff = 1
        l_linger = 0
        lingeropt = struct.pack('ii', l_onoff, l_linger)

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, lingeropt)
        s.bind((host, port))
        s.listen(1)

        while 1:
            try:
                clientsock, clientaddr = s.accept()
                print "Got connection from", clientsock.getpeername()
                data = clientsock.recv(1024*1024*10)
                #print "asdasd", data
                numsent = clientsock.send(data)
                data1 = clientsock.recv(1024*1024*10)
                numsent = clientsock.send(data)
                ret = 1
                while(ret > 0):
                    data1 = clientsock.recv(1024*1024*10)
                    ret = len(data)
                clientsock.close()
            except KeyboardInterrupt:
                raise
            except:
                print traceback.print_exc()
                continue

    Client:

        # File: sockClient.py
        import socket, traceback, sys
        import time

        i = 0
        while 1:
            try:
                st = time.time()
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                while (s.connect_ex(('127.0.0.1', 9999)) != 0):
                    continue
                numsent = s.send("asd"*1000)
                response = s.recv(6000)
                numsent = s.send("asd"*1000)
                response = s.recv(6000)
                i += 1
                if i == int(sys.argv[1]):
                    break
            except KeyboardInterrupt:
                raise
            except:
                print "in exec:::::::::::::", traceback.print_exc()
                continue

        print time.time() - st

  • Bit reversal of an integer, ignoring integer size and endianness

    - by ??O?????
    Given an integer typedef:

        typedef unsigned int TYPE;

    or

        typedef unsigned long TYPE;

    I have the following code to reverse the bits of an integer:

        TYPE max_bit = (TYPE)-1;

        void reverse_int_setup()
        {
            TYPE bits = (TYPE)max_bit;
            while (bits <<= 1)
                max_bit = bits;
        }

        TYPE reverse_int(TYPE arg)
        {
            TYPE bit_setter = 1, bit_tester = max_bit, result = 0;
            for (result = 0; bit_tester; bit_tester >>= 1, bit_setter <<= 1)
                if (arg & bit_tester)
                    result |= bit_setter;
            return result;
        }

    One just needs to run reverse_int_setup() first, which stores an integer with only the highest bit turned on; then any call to reverse_int(arg) returns arg with its bits reversed (to be used as a key into a binary tree, taken from an increasing counter, but that's more or less irrelevant).

    Is there a platform-agnostic way to have, at compile time, the correct value that max_bit holds after the call to reverse_int_setup()? Otherwise, is there an algorithm you consider better/leaner than the one I have for reverse_int()? Thanks.
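
    On the second question: a fixed sequence of shift-and-mask swaps reverses a word in O(log n) steps with no setup pass at all. The idea, sketched in Java purely for illustration (the question's code is C; Java's standard library exposes the same operation as Integer.reverse):

        public class BitReverse {

            /** Reverse the bits of a 32-bit value with shift-and-mask steps. */
            static int reverse32(int x) {
                x = ((x & 0x55555555) << 1) | ((x >>> 1) & 0x55555555); // swap neighbouring bits
                x = ((x & 0x33333333) << 2) | ((x >>> 2) & 0x33333333); // swap bit pairs
                x = ((x & 0x0F0F0F0F) << 4) | ((x >>> 4) & 0x0F0F0F0F); // swap nibbles
                x = ((x & 0x00FF00FF) << 8) | ((x >>> 8) & 0x00FF00FF); // swap bytes
                return (x << 16) | (x >>> 16);                          // swap halves
            }

            public static void main(String[] args) {
                int v = 0b1011;
                System.out.println(Integer.toBinaryString(reverse32(v)));
                System.out.println(Integer.toBinaryString(Integer.reverse(v))); // library equivalent
            }
        }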

  • Looking for ideas for a simple pattern matching algorithm to run on a microcontroller

    - by pic_audio
    I'm working on a project to recognize simple audio patterns. I have two data sets, each made up of between 4 and 32 note/duration pairs. One set is predefined, the other comes from an incoming data stream. The length of the two strongly correlated data sets is often different, but they are roughly the same "shape". My goal is to come up with some sort of ranking of how well the two data sets correlate/match.

    I have converted the incoming frequencies to pitch and shifted the incoming data stream's pitch so that its average pitch matches that of the predefined data set. I also stretch/compress the incoming data set's durations to match the overall duration of the predefined set. Here are two graphical examples of data that should be ranked as strongly correlated: http://s2.postimage.org/FVeG0-ee3c23ecc094a55b15e538c3a0d83dd5.gif (sorry, as a new user I couldn't directly post images).

    I'm doing this on an 8-bit microcontroller, so resources are minimal. Speed is less of an issue; a second or two of processing isn't a deal breaker. It wouldn't surprise me if there is an obvious solution, I've just been staring at the problem too long. Any ideas? Thanks in advance...
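
    One simple scoring idea that follows directly from the preprocessing already described (pitch-shifted, duration-normalized data): resample both sequences onto the same fixed grid and take the mean squared pitch difference, where a lower score means a closer match. A sketch in Java for clarity -- the pitch/duration arrays and the sample count are illustrative, and the integer-only math ports easily to an 8-bit micro:

        public class PatternScore {

            /** Sample the pitch of a note/duration sequence at 'samples' evenly spaced times. */
            static int[] resample(int[] pitch, int[] duration, int samples) {
                int total = 0;
                for (int d : duration) total += d;
                int[] out = new int[samples];
                for (int s = 0; s < samples; s++) {
                    int t = (int) ((long) s * total / samples);  // time of this sample
                    int acc = 0, idx = 0;
                    while (idx < duration.length - 1 && acc + duration[idx] <= t) {
                        acc += duration[idx];
                        idx++;
                    }
                    out[s] = pitch[idx];
                }
                return out;
            }

            /** Mean squared difference of two equal-length samplings; lower = better match. */
            static long score(int[] a, int[] b) {
                long sum = 0;
                for (int i = 0; i < a.length; i++) {
                    long d = a[i] - b[i];
                    sum += d * d;
                }
                return sum / a.length;
            }
        }

    A call like score(resample(refPitch, refDur, 32), resample(inPitch, inDur, 32)) would then give the ranking value; 32 samples is an arbitrary choice.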

  • Finding if a Binary Tree is a Binary Search Tree

    - by dharam
    Today I had an interview where I was asked to write a program which takes a binary tree and returns true if it is also a binary search tree, otherwise false.

    My Approach 1: Perform an inorder traversal and store the elements in O(n) time. Now scan through the array/list of elements and check whether the element at the ith index is greater than the element at the (i+1)th index. If such a condition is encountered, return false and break out of the loop (this takes O(n) time). At the end return true.

    But this gentleman wanted me to provide an efficient solution. I tried but was unsuccessful, because to find out whether it is a BST I have to check each node. Moreover, he was pointing me towards recursion.

    My Approach 2: A BT is a BST if, for any node N, N->left < N and N->right > N, the inorder successor of the left node of N is less than N, the inorder successor of the right node of N is greater than N, and the left and right subtrees are themselves BSTs. But this is going to be complicated and the running time doesn't seem good.

    Please help if you know an optimal solution. Thanks.
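
    For reference, the usual single-pass answer to this interview question is to recurse while passing down the range each node is allowed to occupy; it visits every node once, so it runs in O(n) time with O(height) stack. A sketch (assuming distinct int keys and a hypothetical Node class):

        public class BstCheck {

            static class Node {
                int value;
                Node left, right;
            }

            /** A tree is a BST if every node fits inside the (min, max) range its ancestors impose. */
            static boolean isBst(Node n, long min, long max) {
                if (n == null) return true;                   // empty subtree is trivially a BST
                if (n.value <= min || n.value >= max) return false;
                return isBst(n.left, min, n.value)            // left subtree must stay below n
                    && isBst(n.right, n.value, max);          // right subtree must stay above n
            }

            static boolean isBst(Node root) {
                return isBst(root, Long.MIN_VALUE, Long.MAX_VALUE);
            }
        }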

  • Android: Dialog themed activity not visible

    - by Vincent
    I have an activity which, when started, needs to check if the user is authenticated. If not, I need to display an interface to authenticate. I do this with another activity, which has a dialog theme, and I start it in onResume() with the flags NO_HISTORY and EXCLUDE_FROM_RECENTS.

    Everything works fine when starting the application for the first time. But I have a feature that resets the login after some time if the user is not in an activity. When I test this, I start the application, enter the password, then move back to home. Then when I enter the application again, the background darkens as if the dialog were about to show, but it doesn't. At this point, if I press the back button, the darkening from the background activity disappears for a second, then the dialog finally appears.

    I used logcat to investigate, and the activity lifecycle functions get called properly:

        //For the first start:
        onCreate background activity
        onStart background activity
        onResume background activity
        onPause background activity
        onCreate dialog
        onStart dialog
        onResume dialog
        //Enter password
        onPause dialog
        onResume background activity
        onStop dialog
        onDestroy dialog
        //navigating to homescreen
        onPause background activity
        onStop background activity
        //starting again
        onRestart background activity
        onStart background activity
        onResume background activity
        onPause background activity
        onCreate dialog
        onStart dialog
        onResume dialog
        //no dialog shown, only darkened background activity receiving no input
        //pressing back button
        onPause dialog
        onResume background activity
        onPause background activity
        onCreate NEW dialog
        onStart NEW dialog
        onResume NEW dialog
        onStop OLD dialog
        onDestroy OLD dialog
        //now the dialog is properly shown
        //entering password
        onPause NEW dialog
        onResume background activity
        onStop NEW dialog
        onDestroy NEW dialog

    Using the SINGLE_TOP flag makes no change. However, if I remove the dialog theme from the dialog activity, it IS shown after the restart. So far I didn't want to use a Dialog instead of an Activity, because I sometimes consider them problematic and less encapsulated, and this part has to be quite secure. You may be able to convince me otherwise, though. Thank you in advance!

  • How to weed out the bad programmers from the competent ones in the interview process

    - by thaBadDawg
    I am getting ready to add another developer to my team, and I want to try and fix the mistakes I made in my last hiring cycle. I like to think of myself as a competent programmer (I can be given a project, I can deliver on that project, and the deliverables work with very few if any bugs), and so I ask questions that I would ask myself in an interview. I've come to the conclusion that my interviewing skills are completely lacking, because the last two people I've hired interviewed incredibly well but have been less than ideal at the tasks they've been given. My CTO (who was completely useless in giving any guidance as to how) suggested I improve my interviewing skills.

    The question is this: how does one programmer interview another programmer and get an understanding of the other programmer's abilities?

    Edit: Though slightly different, the answers provided to this question could be of use to you. That question concerns specific interview questions, while yours seems to be more generally about interview approaches and not just about the questions themselves.

    Update: Just for the hell of it I asked two of the guys I worked with if they could do FizzBuzz. 45 minutes and 80 minutes to work it out. And these aren't bottom-level guys either.
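
    For readers who haven't met it, FizzBuzz (mentioned in the update) is a deliberately tiny screening exercise: print 1 to 100, replacing multiples of 3 with "Fizz", multiples of 5 with "Buzz", and multiples of both with "FizzBuzz". A straightforward Java version:

        public class FizzBuzz {
            public static void main(String[] args) {
                for (int i = 1; i <= 100; i++) {
                    if (i % 15 == 0)      System.out.println("FizzBuzz"); // multiple of both 3 and 5
                    else if (i % 3 == 0)  System.out.println("Fizz");
                    else if (i % 5 == 0)  System.out.println("Buzz");
                    else                  System.out.println(i);
                }
            }
        }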

  • Extracting a specific value from command-line output using powershell

    - by Andrew Shepherd
    I've just started learning PowerShell this morning, and I'm grappling with the basics. Here's the problem: I want to query Subversion for the revision number of a repository, and then create a new directory with this revision number. The command svn info outputs a number of label/value pairs, delimited by colons. For example:

        Path: c#
        URL: file:///%5Copdy-doo/Archive%20(H)/Development/CodeRepositories/datmedia/Development/c%23
        Repository UUID: b1d03fca-1949-a747-a1a0-046c2429f46a
        Revision: 58
        Last Changed Rev: 58
        Last Changed Date: 2011-01-12 11:36:12 +1000 (Wed, 12 Jan 2011)

    Here is my current code, which solves the problem. Is there a way to do this with less code? I'm thinking that with the use of piping you could do this in one or two lines. I certainly shouldn't have to use a temporary file.

        $RepositoryRoot = "file:///%5Cdat-ftp/Archive%20(H)/Development/CodeRepositories/datmedia"
        $BuildBase="/Development/c%23"
        $RepositoryPath=$RepositoryRoot + $BuildBase

        # Outputting the info into a file
        svn info $RepositoryPath | Out-File -FilePath svn_info.txt

        $regex = [regex] '^Revision: (\d{1,4})'

        foreach($info_line in get-content "svn_info.txt")
        {
            $match = $regex.Match($info_line);
            if($match.Success)
            {
                $revisionNumber = $match.Groups[1];
                break;
            }
        }

        "Revision number = " + $revisionNumber;

  • Sending data through POST request from a node.js server to a node.js server

    - by Masiar
    I'm trying to send data through a POST request from a node.js server to another node.js server. What I do in the "client" node.js is the following:

        var options = {
            host: 'my.url',
            port: 80,
            path: '/login',
            method: 'POST'
        };

        var req = http.request(options, function(res){
            console.log('status: ' + res.statusCode);
            console.log('headers: ' + JSON.stringify(res.headers));
            res.setEncoding('utf8');
            res.on('data', function(chunk){
                console.log("body: " + chunk);
            });
        });

        req.on('error', function(e) {
            console.log('problem with request: ' + e.message);
        });

        // write data to request body
        req.write('data\n');
        req.write('data\n');
        req.end();

    This chunk is taken more or less from the node.js website, so it should be correct. The only thing I don't see is how to include a username and password in the options variable to actually log in. This is how I deal with the data in the server node.js (I use Express):

        app.post('/login', function(req, res){
            var user = {};
            user.username = req.body.username;
            user.password = req.body.password;
            ...
        });

    How can I add those username and password fields to the options variable to have it logged in? Thanks

  • Image Line Trace Math Help Hard To Explain

    - by Ozzy
    Hi all, sorry for the confusing title, it's really hard for me to explain what I want. So I created this image :)

    OK, so the two RED dots are points on an image. The distance between them isn't important. What I want to do is, using the coordinates of the two dots, work out the angle of the space between them (as shown by the black line between the red dots). Then, once the angle is found, on the last red dot, create two points which cross the angle of the first line. Then from that, scan a half semicircle and get the coordinates of every pixel of the image that the orange line passes. I don't know if this makes any sense to you lot, so I drew another picture:

    As you can see in the second picture, my idea is applied to a line drawn on a black canvas. The two red dots are the starting coordinates; then, at the end of the two dots, a less-than-half semicircle is created. The part that is orange shows the pixels of the image that should be recorded.

    I have no clue how to start this, so if anyone has any ideas on how I can, or on what I need to do, any help is much appreciated :)
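
    The geometry described boils down to two steps: get the angle of the line with atan2, then sample points on an arc centred on the second dot around that angle. A rough Java sketch (the radius, arc width and 180-step resolution are made-up values, and the actual pixel lookup is left as a comment):

        import java.util.ArrayList;
        import java.util.List;

        public class ArcScan {

            /** Collect the pixel coordinates swept by an arc centred on (x2, y2),
                opening around the direction of the line from (x1, y1) to (x2, y2). */
            static List<int[]> scanArc(int x1, int y1, int x2, int y2,
                                       int radius, double arcWidthRadians) {
                double baseAngle = Math.atan2(y2 - y1, x2 - x1);   // angle of the black line
                List<int[]> pixels = new ArrayList<>();
                int steps = 180;                                    // sampling resolution (arbitrary)
                for (int i = 0; i <= steps; i++) {
                    double a = baseAngle - arcWidthRadians / 2
                             + arcWidthRadians * i / steps;
                    int px = x2 + (int) Math.round(radius * Math.cos(a));
                    int py = y2 + (int) Math.round(radius * Math.sin(a));
                    pixels.add(new int[] { px, py });               // look up the image pixel here
                }
                return pixels;
            }
        }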

  • Explicit behavior with checks vs. implicit behavior

    - by Silviu
    I'm not sure how to frame the question, but I'm interested to know what you think of the following situations and which one you would prefer.

    We're working on a client-server application with WinForms, and we have a control with some fields that are calculated automatically when another field is filled in. So we have a currency field which, when filled in by the user, triggers the automatic filling of another field, maybe more. When the user fills in the currency field, a Currency object is retrieved from a cache based on the string the user entered. If the entered currency is not found in the cache, the cache object returns a null reference. Further down, when the application layer is asked to compute the other fields based on the currency, a null currency produces a null value for the dependent field. This way the default, implicit behavior is to clear all fields, which is the expected behavior.

    What I would call the explicit implementation would be to check whether the Currency object is null, in which case the dependent fields are cleared explicitly. I think the latter version is clearer, less error-prone, and more testable, but it implies a form of redundancy. The former version is not as clear, and it relies on a certain behavior from the application layer which is not expressed in the tests -- maybe in the lower-layer tests, but when the need arises to modify the lower layers so that a null currency should return something else, I don't think a test that states just that, without a motivation, is going to prevent a bug from being introduced in the upper layers. What do you guys think?

  • Capture keystrokes (e.g., function keys) while a messagebox is up

    - by FastAl
    We have a large WinForms app, and there is a built-in bug reporting system that can be activated during testing via the F5 key. I am capturing the F5 key with .NET's PreFilterMessage mechanism. This works fine on the main forms, modal dialog boxes, etc.

    Unfortunately, the program also displays Windows message boxes when it needs to. When there is a bug with one of those, e.g. wrong text in the message box, or it shouldn't be there at all, the message filter isn't executed at all while the message box is up! I realize I could fix this by either writing my own message box routine, or kicking off a separate thread that polls GetAsyncKeyState and calls the error reporter from there. However, I was hoping for a method that is less of a hack.

    Here's code that manifests the problem:

        Public Class Form1
            Implements IMessageFilter

            Private Sub Form1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Click
                MsgBox("now, a messagebox is up!")
            End Sub

            Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
                Application.AddMessageFilter(Me)
            End Sub

            Public Function PreFilterMessage(ByRef m As System.Windows.Forms.Message) _
                    As Boolean Implements IMessageFilter.PreFilterMessage
                Const VK_F5 As Int32 = &H74
                Const WM_KEYDOWN As Integer = &H100
                If m.Msg = WM_KEYDOWN And m.WParam.ToInt32 = VK_F5 Then
                    ' In reality code here takes a screenshot, saves the program state, and shows a bug report interface
                    ' IO.File.AppendAllText("c:\bugs.txt", InputBox("Describe the bug:"))
                End If
            End Function

        End Class

    Many thanks.

  • Why is Raphael.JS creating paper with dimensions 1000x1000?

    - by Bryan
    I have a demo using raphael.js. The code for it is very simple, but when viewed in Internet Explorer (less than version 9) I get a Raphael canvas that is 1000px by 1000px and I can't figure out why. I'm using version 1.5.2 of Raphael. Code below:

    HTML

        <!DOCTYPE html>
        <html>
        <head>
            <meta charset="UTF-8">
            <title>Raphael Test</title>
            <link rel="stylesheet" type="text/css" href="test.css">
            <link href="../shared/img/favicon.png" rel="shortcut icon">
        </head>
        <body>
            <div id="graph"></div>
            <script src="../shared/js/raphael/raphael-min.js" type="text/javascript"></script>
            <script src="test.js" type="text/javascript"></script>
        </body>
        </html>

    CSS

        /* Graph */
        #graph {
            padding: 5px;
            width: 477px;
            height: 299;
        }

    JS

        var holder = document.getElementById('graph')
          , width = holder.scrollWidth
          , height = Math.round(width * 0.5625) + 25
          , p = Raphael(10, 50, width, height)
          , c = p.circle(p.width - 50, p.height - 50, 50);

        alert(p.width + ' & ' + p.height);

    I found a discussion in Raphael's Google group describing the same problem, but with no resolution.

  • Why does ICEfaces send dispose-window request on page unload when using view-scoped bean?

    - by woflrevo
    In our application, ICEfaces always sends a dispose-window request just before navigating to another JSF page. As far as I understand, this should not happen when org.icefaces.lazyWindowScope is set to true and no window-scoped bean is involved in the current request. But it happens on each link and makes our UI less responsive, even though we don't have any window-scoped bean in our application.

    Is it a bug in ICEfaces that the dispose request is sent when using view-scoped beans? Is it possible to disable it? ViewScope is defined in JSF, not in ICEfaces, so it should work without this dispose request, I guess...

        @ManagedBean(name="viewScopeBean")
        @ViewScoped
        public class ViewScopeBean {
            public void doSomething(){
                //
            }
        }

    And here is the example JSF:

        <ice:form>
            <ice:commandButton value="doSomething" action="#{viewScopeBean.doSomething}"/>
            <h:link outcome="index" value="Link to same page"/>
        </ice:form>

    To reproduce, do the following using the code above:

    1. open Firebug's Net tab and activate the persist option
    2. click the doSomething button
    3. click "link to same page" -- a dispose-window request will be sent before navigation

    Dispose request parameters:

        ice.submit.type=ice.dispose.window
        ice.window=4guthcbue
        javax.faces.ViewState=-8138151632882151449%3A-6709064564386098402

    Environment:

        ICEfaces-EE 2.0.0.GA
        ICEpush-EE 2.0.0.GA
        Mojarra 2.1.1
        JRockit 1.6.0_22
        WebLogic Server 10.3.4.0

    ICEfaces configuration:

        org.icefaces.render.auto: true [default]
        org.icefaces.autoid: true [default]
        org.icefaces.aria.enabled: true [default]
        org.icefaces.blockUIOnSubmit: false [default]
        org.icefaces.compressDOM: false [default]
        org.icefaces.compressResources: true [default]
        org.icefaces.connectionLostRedirectURI: /pages/main.jsf
        org.icefaces.deltaSubmit: false [default]
        org.icefaces.lazyPush: true [default]
        org.icefaces.sessionExpiredRedirectURI: /pages/main.jsf
        org.icefaces.standardFormSerialization: false [default]
        org.icefaces.strictSessionTimeout: false [default]
        org.icefaces.windowScopeExpiration = 1000 [default]
        org.icefaces.mandatoryResourceConfiguration: null [default]
        org.icefaces.uniqueResourceURLs: true [default]
        org.icefaces.lazyWindowScope: true [default]
        org.icefaces.disableDefaultErrorPopups: false [default]

  • Crosstab/Cube/Pivot Components for Delphi

    - by Anagoge
    I'm looking for a Delphi VCL crosstab/cube/pivot-cube/OLAP grid component for Delphi 2009, 2010, or XE. I'm willing to sacrifice advanced features to get something open/free (or very cheap if I must), to make it easier to collaborate with any future developers without anyone having to purchase more components than I already use, since this will just be used in one screen. If there isn't anything appropriate out there, I may try to implement something simple on my own.

    I can live with some fairly basic features: drag and drop to configure dimensions, sort by a column, allow totals/min/max for a column, and (optionally) expand/collapse or drill down to sub-categories. Blazing performance and enterprise scalability are not required, since there should be fewer than 2000 source rows.

    There appear to be several decent options in the commercial space (ExpressPivotCube, FastCube, HierCube), but they are all a few hundred dollars. This project already uses existing installations of Excel 2007 and SQL Server 2005/2008, so I might consider leveraging those, though I'd prefer a native Delphi component if possible.

    There are also the very old Decision Cube components included in Delphi's Source\xtab directory, but they apparently no longer support Unicode compilers (Delphi 2009+), since I got dozens of Unicode-related compilation errors when test-compiling that source in Delphi XE. Those components also still link to the long-deprecated BDE! Has anyone modified Decision Cube to support Unicode / pure TDataSet? The online tutorials I found were incomplete and silent on the dozens of BDE/Unicode compilation errors I see, so I might have to tackle that on my own.

    Does anyone have suggestions on where to start for a free/cheap basic crosstab/pivot grid component?

  • ServerIdentity memory leak with IHttpAsyncHandler

    - by Anton
    I have a .NET web application that consists of a single HTTP handler class that implements IHttpAsyncHandler. All requests to this handler are handled asynchronously; some requests are short-lived and some are long-lived (nothing over a few seconds).

    The problem is that memory consumption grows over time as requests are handled. All profiling results point to an unbounded growth of String objects held by instances of System.Runtime.Remoting.ServerIdentity. Every String value is different, but they all look similar to:

        /dd41c00e_1566_4702_b660_c81cdea18a43/vigefresi5pfv8n0ekddg57z_1154.rem

    There is nothing in my application that uses ServerIdentity directly, and unless I am mistaken, the number of ServerIdentity instances is proportional to the number of incoming requests. If this is an internal .NET structure, it looks like the CLR is not cleaning up after itself. What could be causing the leak?

    UPDATE: A little less than half of the String objects are being held by System.Runtime.Remoting. The remaining String objects are being held by System.Runtime.Serialization and look similar to:

        +1sgess5rjcrgbmp3kqr6bmv_3474.rem

    Also, the problem only seems to occur when many simultaneous HTTP web requests arrive.

  • JavaScript keeps returning ambiguous error (in ASP.NET MVC 2.0)

    - by Erx_VB.NExT.Coder
    This is my function (with other lines I've tried/abandoned):

        function DoClicked(eNumber) {
            //obj.style = 'bgcolor: maroon';
            var eid = 'cat' + eNumber;
            //$get(obj).style.backgroundColor = 'maroon';
            //var nObj = $get(obj);
            var nObj = document.getElementById(eid)
            //alert(nObj.getAttribute("style"));
            nObj.style.backgroundColor = 'Maroon';
            alert(nObj.style.backgroundColor);
            //nObj.setAttribute("style", "backgroundcolor: Maroon");
        };

    This error keeps getting returned even after the last line in the function runs:

        Microsoft JScript runtime error: Sys.ArgumentUndefinedException: Value cannot be undefined. Parameter name: method

    The function is called via an "OnSuccess" set in my Ajax.ActionLink call (ASP.NET MVC)... anyone have any ideas on this? I have these referenced -- even when I swap the 'debug' versions for the normal versions, I still get an error, but the error just has much less information and says 'b' is undefined (probably an MS JS library internal variable)...

        <script src="../../Scripts/MicrosoftAjax.debug.js" type="text/javascript"></script>
        <script src="../../Scripts/MicrosoftMvcValidation.debug.js" type="text/javascript"></script>
        <script src="../../Scripts/MicrosoftMvcAjax.debug.js" type="text/javascript"></script>
        <script src="../../Scripts/jquery-1.4.1.js" type="text/javascript"></script>

    Also, this is how I am calling the ActionLink method:

        Ajax.ActionLink(item.CategoryName, "SubCategoryList", "Home",
            New With {.CategoryID = item.CategoryID},
            New AjaxOptions With {.UpdateTargetId = "SubCat", .HttpMethod = "Post",
                .OnSuccess = "DoClicked(" & item.CategoryID.ToString & ")"},
            New With {.id = "cat" & item.CategoryID.ToString})

  • C# File IO with Streams - Best Memory Buffer Size

    - by AJ
    Hi, I am writing a small I/O library to assist with a larger (hobby) project. Part of this library performs various functions on a file, which is read/written via a FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but it is not too time consuming (it could just be a simple file copy, for example, or may involve encryption...).

    My main question is: what is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2 KB, which would cover a CD sector size and is a nice multiple of a 512-byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise that with today's PCs I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app.

    As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP/HTTP servers (over a local network / fast-ish DSL). What would be the best memory buffer size for those (again, a "best-case" trade-off between perceived responsiveness and performance)?

    Thanks in advance for any ideas,
    Adam
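
    The core of the question is language-agnostic: the loop reads a chunk, writes it, and raises a progress event, so the buffer size trades per-read overhead against UI update frequency. A sketch of that loop in Java for illustration (the ProgressListener type is hypothetical, and 64 KiB in the usage note below is just a common middle-ground choice, not a recommendation from the post):

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;

        public class BufferedCopier {

            /** Callback fired after each buffer-sized chunk; hypothetical, for the example only. */
            public interface ProgressListener {
                void onProgress(long bytesCopied);
            }

            /** Copy in to out using the given buffer size, reporting progress per chunk. */
            public static long copy(InputStream in, OutputStream out,
                                    int bufferSize, ProgressListener listener) throws IOException {
                byte[] buffer = new byte[bufferSize];   // the tunable the question asks about
                long total = 0;
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);
                    total += read;
                    listener.onProgress(total);         // UI update frequency depends on bufferSize
                }
                return total;
            }
        }

    A call like copy(in, out, 64 * 1024, total -> updateUi(total)) would then report progress once per 64 KiB chunk (updateUi being whatever the host app uses to refresh its display).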
