Search Results

Search found 4835 results on 194 pages for 'practice'.


  • Windows 7 Machine Makes Router Drop -All- Wireless Connections [closed]

    - by Hammer Bro.
    Note: I accidentally originally posted this question over at SuperUser, and I still think the issue is caused by some low-level networking practice of Windows 7, but I think the expertise here would be more apt to figure it out. Apologies for the cross-post.

    Some background: My home network consists of my desktop, a two-month-old Windows 7 (x64) machine which is online most frequently (N-spec), as well as three other Windows XP laptops (all G) that only connect every now and then (one for work, one for Netflix, and the other for infrequent regular laptop uses). I used to have a Belkin F5D8236-4 wireless router, and everything worked great. A week ago, however, I found out that the Belkin absolutely would not establish a VPN connection, something that has become important for work. So I bought a Netgear WNR3500v2/U/L.

    The wireless was acting a little sketchy at first for just the Windows 7 machine, but I thought it had something to do with 802.11N, and since I was in a hurry I just fished up an ethernet cable and disabled the computer's wireless. It has now become apparent, though, that whenever the Windows 7 machine is connected to the router, all wireless connections become unstable. I was using my work laptop for a solid six hours today with no trouble, with multiple SSH connections open over VPN and internet radio streaming in the background. Then, within two minutes of turning on this Windows 7 box, I had lost all connectivity over the wireless -- and I was two feet away from the router. The same sort of thing happens on all of the other laptops: Netflix can play all weekend, but if I come up here and do things on this (W7) computer, the streaming is dead within ten minutes.

    So here are my basic observations:
    - If the Windows 7 machine is off, all connections have a Signal Strength of Very Good or Excellent and a Speed of 48-54 Mbps for an indefinite amount of time.
    - Shortly after the Windows 7 machine is turned on, all wireless connections experience a consistent decline in Speed down to 1.0 Mbps, eventually losing their connection entirely. These machines continue to report 70% signal strength, as observed both by themselves and by the router.
    - Once dropped, a wireless connection has difficulty reconnecting, and if a connection does manage to become established, it quickly drops off again.
    - The Windows 7 machine itself continues to function just fine over its wired connection, although it experiences these same issues over the wireless.
    - All of the drivers and firmware are up to date, and this happened both with the stock Netgear firmware and with the (current) DD-WRT.

    What I've tried:
    - Making sure each computer is being assigned a distinct IP. (They are.)
    - Disabling UPnP and Stateful Packet Inspection on the router.
    - Disabling the Network Sharing, SSDP Discovery, TCP/IP NetBIOS Helper and Computer Browser services on the Windows 7 machine.
    - Disabling the QoS Packet Scheduler, IPv6, and Link-Layer Topology Discovery options on my ethernet controller (leaving only Client for Microsoft Networks, File and Printer Sharing, and IPv4 enabled).

    What I think: It seems awfully similar to the problems discussed in detail at http://social.msdn.microsoft.com/Forums/en/wsk/thread/1064e397-9d9b-4ae2-bc8e-c8798e591915 (which was both the most relevant and the most concrete information I could dig up on the internet). I still think that something the Windows 7 IP stack (or the operating system itself) is doing is giving the router fits. However, I could be wrong, because I have two key differences. One is that most instances of this problem are reported as the entire router dying or restarting, and mine still works just fine over the wired connection. The other is that it's a new router, tested with both the factory firmware and the (I assume) well-maintained DD-WRT project. Even if Windows 7 is still secretly sending IPv6 packets, or using the TCP window scaling implementation that I hear caused some trouble in Vista (even though I've tried my best to disable anything fancy), this router should support those functions.

    I don't want to get a new or replacement router unless someone can convince me that this is a defective unit, and the problem seems too specific and predictable to my instincts to be a hardware hiccup. I also don't want to deal with the inevitable problems that always seem to take half a day to resolve when getting a new router, since I'm frantically working (including tomorrow) to complete a project by next week's deadline. In the worst-case scenario, I could keep this router connected directly to the modem, disable its wireless entirely, and connect the old Belkin to it directly. That would let me still use VPN (although I'd have to plug my work laptop directly into that router) and maintain wireless connections for all of the other computers. But that feels so wrong to me. Anyone have any ideas what the cause and possible solution could be?

    Clarifications: The Windows 7 machine is directly connected via an ethernet cable to the router for everything above, but while it is online, all other computers' wireless connections become unusable. It is not an issue of signal strength or interference -- no other devices within scanning range are using channel 1, and the problem affects computers that are literally feet away from the router with 95% signal strength.

  • How to move child element from one parent to another using jQuery

    - by Kapslok
    I am using the jQuery DataTables plugin. I would like to move the search box (.dataTables_filter) and the number-of-records-to-display dropdown (.dataTables_length) from their parent element (.dataTables_wrapper) to another div on my page without losing any registered JavaScript behavior. For instance, the search box has a function attached to the 'keyup' event and I want to keep that intact. The DOM looks like this:

        <body>
          <div id="parent1">
            <div class="dataTables_wrapper" id="table1_wrapper">
              <div class="dataTables_length" id="table1_length">
                <select size="1" name="table1_length">
                  <option value="10">10</option>
                  <option value="25">25</option>
                  <option value="50">50</option>
                  <option value="100">100</option>
                </select>
              </div>
              <div class="dataTables_filter" id="table1_filter">
                <input type="text" class="search">
              </div>
              <table id="table1"> ... </table>
            </div>
          </div>
          <div id="parent2">
            <ul>
              <li><a href="#">Link A</a></li>
              <li><a href="#">Link B</a></li>
              <li><a href="#">Link C</a></li>
            </ul>
          </div>
        </body>

    This is what I would like the DOM to look like after the move:

        <body>
          <div id="parent1">
            <div class="dataTables_wrapper" id="table1_wrapper">
              <table id="table1"> ... </table>
            </div>
          </div>
          <div id="parent2">
            <div class="dataTables_filter" id="table1_filter">
              <input type="text" class="search">
            </div>
            <div class="dataTables_length" id="table1_length">
              <select size="1" name="table1_length">
                <option value="10">10</option>
                <option value="25">25</option>
                <option value="50">50</option>
                <option value="100">100</option>
              </select>
            </div>
            <ul>
              <li><a href="#">Link A</a></li>
              <li><a href="#">Link B</a></li>
              <li><a href="#">Link C</a></li>
            </ul>
          </div>
        </body>

    I've been looking at the .append(), .appendTo(), .prepend() and .prependTo() functions, but haven't had any luck with these in practice. I've also looked at the .parent() and .parents() functions, but can't seem to code a workable solution. I have also considered changing the CSS so that the elements are absolutely positioned -- but to be frank, the page is set up with fluid elements all over, and I really want these elements to be floated in their new parents. Any help with this is much appreciated.

  • Problems with multiple setIntervals running simultaneously

    - by Roel V.
    Hello, my first post here. I want to make a horizontal menu with submenus sliding down on mouseover. I know I could use jQuery, but this is to practice my JavaScript skills. I use the following code:

        var up = new Array();
        var down = new Array();
        var submenustart;

        function titleover(headmenu, inter) {
            submenu = headmenu.lastChild;
            up[inter] = window.clearInterval(up[inter]);
            down[inter] = window.setInterval("slidedown(submenu)", 1);
        }

        function slidedown(submenu) {
            if (submenu.offsetTop < submenustart) {
                submenu.style.top = submenu.offsetTop + 1 + "px";
            }
        }

        function titleout(headmenu, inter) {
            submenu = headmenu.lastChild;
            down[inter] = window.clearInterval(down[inter]);
            up[inter] = window.setInterval("slideup(submenu)", 1);
        }

        function slideup(submenu) {
            if (submenu.offsetTop > submenustart - submenu.clientHeight + 1) {
                submenu.style.top = submenu.offsetTop - 1 + "px";
            }
        }

    The variable submenustart gets assigned a value in another function which is not relevant to my question. The HTML looks like this:

        <table class="hoofding" id="hoofding">
          <tr>
            <td onmouseover="titleover(this, 0)" onmouseout="titleout(this, 0)"><a href="#" class="hoofdinglink" id="hoofd1">AAAA</a>
              <table class="menu">
                <tr><td><a href="...">1111</a></td></tr>
                <tr><td><a href="...">2222</a></td></tr>
                <tr><td><a href="...">3333</a></td></tr>
              </table></td>
            <td onmouseover="titleover(this, 1)" onmouseout="titleout(this, 1)"><a href="#" class="hoofdinglink">BBBB</a>
              <table class="menu">
                <tr><td><a href="...">1111</a></td></tr>
                <tr><td><a href="...">2222</a></td></tr>
                <tr><td><a href="...">3333</a></td></tr>
                <tr><td><a href="...">4444</a></td></tr>
                <tr><td><a href="...">5555</a></td></tr>
              </table></td>
            ...
          </tr>
        </table>

    What happens is the following: if I go over and out of (for example) menu A, it works fine. If I then go over menu B, the interval applied to A is now applied to B, so there are two interval functions applied to B: the one originally for A and a new one triggered by the mouseover on B. If I then go to A, all the intervals are applied to A. I have been searching for hours and I am completely stuck. Thanks in advance.

  • How to sanely configure security policy in Tomcat 6

    - by Chas Emerick
    I'm using Tomcat 6.0.24, as packaged for Ubuntu Karmic. The default security policy of Ubuntu's Tomcat package is pretty stringent, but appears straightforward. In /var/lib/tomcat6/conf/policy.d, there are a variety of files that establish default policy. Worth noting at the start:

    - I've not changed the stock Tomcat install at all -- no new jars in its common lib directory(ies), no server.xml changes, etc. Putting the .war file in the webapps directory is the only deployment action.
    - The web application I'm deploying fails with thousands of access denials under this default policy (as reported to the log thanks to the -Djava.security.debug="access,stack,failure" system property).
    - Turning off the security manager entirely results in no errors whatsoever, and proper app functionality.

    What I'd like to do is add an application-specific security policy file to the policy.d directory, which seems to be the recommended practice. I added this to policy.d/100myapp.policy (as a starting point -- I would like to eventually trim back the granted permissions to only what the app actually needs):

        grant codeBase "file:${catalina.base}/webapps/ROOT.war" {
          permission java.security.AllPermission;
        };
        grant codeBase "file:${catalina.base}/webapps/ROOT/-" {
          permission java.security.AllPermission;
        };
        grant codeBase "file:${catalina.base}/webapps/ROOT/WEB-INF/-" {
          permission java.security.AllPermission;
        };
        grant codeBase "file:${catalina.base}/webapps/ROOT/WEB-INF/lib/-" {
          permission java.security.AllPermission;
        };
        grant codeBase "file:${catalina.base}/webapps/ROOT/WEB-INF/classes/-" {
          permission java.security.AllPermission;
        };

    Note the thrashing around attempting to find the right codeBase declaration; I think that's likely my fundamental problem. Anyway, the above (really only the first two grants appear to have any effect) almost works: the thousands of access denials are gone, and I'm left with just one. Relevant stack trace:

        java.security.AccessControlException: access denied (java.io.FilePermission /var/lib/tomcat6/webapps/ROOT/WEB-INF/classes/com/foo/some-file-here.txt read)
          java.security.AccessControlContext.checkPermission(AccessControlContext.java:323)
          java.security.AccessController.checkPermission(AccessController.java:546)
          java.lang.SecurityManager.checkPermission(SecurityManager.java:532)
          java.lang.SecurityManager.checkRead(SecurityManager.java:871)
          java.io.File.exists(File.java:731)
          org.apache.naming.resources.FileDirContext.file(FileDirContext.java:785)
          org.apache.naming.resources.FileDirContext.lookup(FileDirContext.java:206)
          org.apache.naming.resources.ProxyDirContext.lookup(ProxyDirContext.java:299)
          org.apache.catalina.loader.WebappClassLoader.findResourceInternal(WebappClassLoader.java:1937)
          org.apache.catalina.loader.WebappClassLoader.findResource(WebappClassLoader.java:973)
          org.apache.catalina.loader.WebappClassLoader.getResource(WebappClassLoader.java:1108)
          java.lang.ClassLoader.getResource(ClassLoader.java:973)

    I'm pretty convinced that the actual file that's triggering the denial is irrelevant -- it's just some properties file that we check for optional configuration parameters. What's interesting is that:

    - it doesn't exist in this context;
    - the fact that the file doesn't exist ends up throwing a security exception, rather than java.io.File.exists() simply returning false (although I suppose that's just a matter of the semantics of the read permission).

    Another workaround (besides just disabling the security manager in Tomcat) is to add an open-ended permission to my policy file:

        grant {
          permission java.security.AllPermission;
        };

    I presume this is functionally equivalent to turning off the security manager. I suppose I must be getting the codeBase declaration in my grants subtly wrong, but I'm not seeing it at the moment.

  • Sending emails in web applications

    - by Denise
    Hi everyone, I'm looking for some opinions here. I'm building a web application which has the fairly standard functionality of:

    1. Register for an account by filling out a form and submitting it.
    2. Receive an email with a confirmation code link.
    3. Click the link to confirm the new account and log in.

    When you send emails from your web application, it's often (usually) the case that there will be some change to the persistence layer. For example:

    1. A new user registers for an account on your site -- the new user is created in the database and an email is sent to them with a confirmation link.
    2. A user assigns a bug or issue to someone else -- the issue is updated and email notifications are sent.

    How you send these emails can be critical to the success of your application, and how you send them depends on how important it is that the intended recipient receives the email. We'll look at the following four strategies in relation to the case where the mail server is down, using example 1.

    TRANSACTIONAL & SYNCHRONOUS: The sending of the email fails and the user is shown an error message saying that their account could not be created. The application appears slow and unresponsive as it waits for the connection timeout. The account is not created in the database because the transaction is rolled back.

    TRANSACTIONAL & ASYNCHRONOUS: "Transactional" here refers to sending the email to a JMS queue or saving it in a database table for another background process to pick up and send. The user account is created in the database, the email is sent to a JMS queue for processing later, and the transaction is successful and committed. The user is shown a message saying that their account was created and to check their email for a confirmation link. It's possible in this case that the email is never sent due to some other error, even though the user is told that it has been sent, and there may be some delay in getting the email to the user if application support has to be called in to diagnose the email problem.

    NON-TRANSACTIONAL & SYNCHRONOUS: The user is created in the database, but the application gets a timeout error when it tries to send the email with the confirmation link. The user is shown an error message, and the application is slow and unresponsive as it waits for the connection timeout. When the mail server comes back to life and the user tries to register again, they are told their account already exists but has not been confirmed, and are given the option of having the email re-sent to them.

    NON-TRANSACTIONAL & ASYNCHRONOUS: The only difference between this and transactional & asynchronous is that if there is an error sending the email to the JMS queue or saving it in the database, the user account is still created, but the email is never sent until the user attempts to register again.

    What I'd like to know is: what have other people done here? Can you recommend any solutions other than the four I've mentioned above? What's a reasonable way of approaching this problem? I don't want to over-engineer a system that's dealing with the (hopefully) rare situation where my mail server goes down! The simplest thing to do is to code it synchronously, but are there any other pitfalls to that approach? I guess I'm wondering if there's a best practice -- I couldn't find much out there by googling.
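
    For concreteness, here is a minimal sketch of the asynchronous idea using an in-memory queue and a background sender. Everything in it (MailOutbox, trySend, the 30-second backoff) is a hypothetical placeholder -- a production version would persist the queue, as the JMS/database-table option above describes, so pending mail survives a restart:

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.TimeUnit;

        // Hypothetical outbox: in a real system this queue would be a database
        // table or JMS destination rather than an in-memory structure.
        public class MailOutbox {
            private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

            // Called inside the registration flow: enqueueing is fast and cannot
            // block the user on a dead SMTP server.
            public void enqueue(String confirmationEmail) {
                pending.add(confirmationEmail);
            }

            // Background sender thread: retries until the mail server accepts.
            public void startSender() {
                Thread sender = new Thread(() -> {
                    while (!Thread.currentThread().isInterrupted()) {
                        try {
                            String email = pending.take();
                            if (!trySend(email)) {
                                pending.add(email);         // re-queue on failure
                                TimeUnit.SECONDS.sleep(30); // back off before retrying
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                });
                sender.setDaemon(true);
                sender.start();
            }

            // Placeholder for the real SMTP call (e.g. via JavaMail).
            private boolean trySend(String email) {
                System.out.println("sending: " + email);
                return true;
            }
        }

    The design point this illustrates: the user's transaction only touches the outbox, so the registration commit never waits on -- or fails because of -- the mail server.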

  • php script dies when it calls a bash script, maybe a problem of server configuration

    - by user347501
    Hi! I have some problems with a PHP script that calls a Bash script. In the PHP script an XML file is uploaded, then the PHP script calls a Bash script that cuts the file into portions (for example, if an XML file of 30,000 lines is uploaded, the Bash script cuts the file into portions of 10,000 lines, so there will be 3 files of 10,000 lines each).

    The file is uploaded and the Bash script cuts the lines, but when the Bash script returns to the PHP script, the PHP script dies, and I don't know why. I tested the script on another server and it works fine. I don't believe it is a memory problem; maybe it is a processor problem. I don't know what to do -- what can I do? (I'm using the function shell_exec in PHP to call the Bash script.)

    The error only happens if the XML file has more than 8,000 lines; if the file has fewer than 8,000, everything is OK (this is relative -- it depends on the amount of data, of strings, of letters that each line contains). What can you suggest? (Sorry for my bad English, I have to practice a lot xD.) I leave the code here.

    PHP script (at the end, after the ?>, there is HTML & JavaScript code, but the HTML doesn't appear here, only the JavaScript code... basically the HTML is only to upload the file). The opening of the debug function was partly eaten when posting; reconstructed as closely as the surviving text allows:

        <?php
        function debug($str){
            echo "" . date('c') . ": $str";
            $file = fopen("uploadxmltest.debug.txt","a");
            fwrite($file, date('c') . ": $str\n");
            fclose($file);
        }

        try{
            if(is_uploaded_file($_FILES['tfile']['tmp_name'])){
                debug("step 1: the file was uploaded");
                $norg=date('y-m-d')."_".md5(microtime());
                $nfle="testfiles/$norg.xml";
                $ndir="testfiles/$norg";
                $ndir2="testfiles/$norg";
                if(move_uploaded_file($_FILES['tfile']['tmp_name'],"$nfle")){
                    debug("step 2: the file was moved to the directory");
                    debug("memory_get_usage(): " . memory_get_usage());
                    debug("memory_get_usage(true): " . memory_get_usage(true));
                    debug("memory_get_peak_usage(): " . memory_get_peak_usage());
                    debug("memory_get_peak_usage(true): " . memory_get_peak_usage(true));
                    $shll=shell_exec("./crm_cutfile_v2.sh \"$nfle\" \"$ndir\" \"$norg\" ");
                    debug("result: $shll");
                    debug("memory_get_usage(): " . memory_get_usage());
                    debug("memory_get_usage(true): " . memory_get_usage(true));
                    debug("memory_get_peak_usage(): " . memory_get_peak_usage());
                    debug("memory_get_peak_usage(true): " . memory_get_peak_usage(true));
                    debug("step 3: the file was cutted. END");
                }
                else{
                    debug("ERROR: I didnt move the file");
                    exit();
                }
            }
            else{
                debug("ERROR: I didnt upload the file");
                //exit();
            }
        }
        catch(Exception $e){
            debug("Exception: " . $e->getMessage());
            exit();
        }
        ?>

    JavaScript:

        function uploadFile(){
            alert("start");
            if(document.test.tfile.value==""){
                alert("First you have to upload a file");
            }
            else{
                document.test.submit();
            }
        }

    Bash script with AWK (the output redirections were stripped when posting; `>>` restored where the echoes write into the output files -- the target directory is freshly created per upload, so append is equivalent to create here):

        #!/bin/bash
        #For single messages (one message per contact)
        function cutfile(){
            lines=$( cat "$1" | awk 'END {print NR}' )
            fline="$4";
            if [ -d "$2" ]; then
                exsts=1
            else
                mkdir "$2"
            fi
            cp "$1" "$2/datasource.xml"
            cd "$2"
            i=1
            contfile=1
            while [ $i -le $lines ]
            do
                currentline=$( cat "datasource.xml" | awk -v fl=$i 'NR==fl {print $0}' )
                #creates first file
                if [ $i -eq 1 ]; then
                    echo "$fline" >> "$3_1.txt"
                else
                    #creates the rest of the files when there are more than 10,000 contacts
                    rsd=$(( ( $i - 2 ) % 10000 ))
                    if [ $rsd -eq 0 ]; then
                        echo "" >> "$3_$contfile.txt"
                        contfile=$(( $contfile + 1 ))
                        echo "$fline" >> "$3_$contfile.txt"
                    fi
                fi
                echo "$currentline" >> "$3_$contfile.txt"
                i=$(( $i + 1 ))
            done
            echo "" >> "$3_$contfile.txt"
            return 1
        }

        #For multiple messages (one message for all contacts)
        function cutfile_multi(){
            return 1
        }

        cutfile "$1" "$2" "$3" "$4"
        echo 1

    thanks!!!!! =D

  • Two UIViews, one UIViewController (in one UINavigationController)

    - by jdandrea
    Given an iPhone app with a UITableViewController pushed onto a UINavigationController, I would like to add a right bar button item to toggle between the table view and an "alternate" view of the same data. Let's also say that this other view uses the same data but is not a UITableView.

    Now, I know variations on this question already exist on Stack Overflow. However, in this case, that alternate view would not be pushed onto the UINavigationController. It would be visually akin to flipping the current UIViewController's table view over and revealing the other view, then being able to flip back. In other words, it's intended to take up a single spot in the UINavigationController hierarchy. Moreover, whatever selection you ultimately make from within either view will push a common UIViewController onto the UINavigationController stack.

    Still more info: we don't want to use a separate UINavigationController just to handle this pair of views, and we don't want to split these apart via a UITabBarController either. Visually and contextually, the UX is meant to show two sides of the same coin. It's just that those two sides happen to involve their own view controllers in normal practice.

    Now … it turns out I have already gone and quickly set this up to see how it might work! However, upon stepping back to examine it, I get the distinct impression that I went about it in a rather non-MVC way, which of course concerns me a bit. Here's what I did at a high level.

    Right now, I have a UIViewController (not a UITableViewController) that handles all commonalities between the two views, such as fetching the raw data. I also have two NIBs, one for each view, and two UIView objects to go along with them. (One of them is a UITableView, which is a kind of UIView.) I switch between the views using animation (easy enough). Also, in an effort to keep things encapsulated, the now-split-apart UITableView (not the UIViewController!) acts as its own delegate and data source, fetching data from the VC. The VC is set up as a weak, non-retained object in the table view. In parallel, the alternate view gets at the raw data from the VC in the exact same way.

    So, there are a few things that smell funny here. The weak linking from child to parent, while polite, seems like it might be wrong. Making a UITableView the table's data source and delegate also seems odd to me, thinking that a view controller is where you want to put that per Apple's MVC diagrams. As it stands now, it would appear as if the view knows about the model, which isn't good. Loading up both views in advance also seems odd, because lazy loading is no longer in effect. Losing the benefits of a UITableViewController (like auto-scrolling to cells with text fields) is also a bit frustrating, and I'd rather not reinvent the wheel to work around that as well.

    Given all of the above, and given that we want that "flip effect" in the context of a single spot on a single UINavigationController, and given that both views are two sides of the same coin, is there a better, more obvious way to design this that I'm just happening to miss completely? Clues appreciated!

  • Weighted round robins via TTL - possible?

    - by Joe Hopfgartner
    I currently use DNS round robin for load balancing, which works great. The records look like this (I have a TTL of 120 seconds):

        ;; ANSWER SECTION:
        orion.2x.to.        116     IN      A       80.237.201.41
        orion.2x.to.        116     IN      A       87.230.54.12
        orion.2x.to.        116     IN      A       87.230.100.10
        orion.2x.to.        116     IN      A       87.230.51.65

    I learned that not every ISP / device treats such a response the same way. For example, some DNS servers rotate the addresses randomly or always cycle through them. Some just propagate the first entry; others try to determine which is best (regionally near) by looking at the IP address. However, if the user base is big enough (spread over multiple ISPs, etc.), it balances pretty well. The discrepancy from the highest- to the lowest-loaded server hardly ever exceeds 15%.

    However, now I have the problem that I am introducing more servers into the system, and they do not all have the same capacities. I currently only have 1 Gbps servers, but I want to work with 100 Mbit and also 10 Gbps servers too. So what I want is to introduce a 10 Gbps server with a weight of 100, a 1 Gbps server with a weight of 10, and a 100 Mbit server with a weight of 1. I used to add servers twice to bring more traffic to them (which worked nicely -- the bandwidth almost doubled), but adding a 10 Gbit server 100 times to DNS is a bit ridiculous.

    So I thought about using the TTL. If I give server A a 240-second TTL and server B only 120 seconds (which is about the minimum to use for round robin, as a lot of DNS servers bump the TTL up to 120 if a lower one is specified... so I have heard), I think something like this should occur in an ideal scenario:

    First 120 seconds:
    - 50% of requests get server A -> keep it for 240 seconds
    - 50% of requests get server B -> keep it for 120 seconds

    Second 120 seconds:
    - 50% of requests still have server A cached -> keep it for another 120 seconds
    - 25% of requests get server A -> keep it for 240 seconds
    - 25% of requests get server B -> keep it for 120 seconds

    Third 120 seconds:
    - 25% will get server A (from the 50% of server A that now expired) -> cache 240 sec
    - 25% will get server B (from the 50% of server A that now expired) -> cache 120 sec
    - 25% will have server A cached for another 120 seconds
    - 12.5% will get server B (from the 25% of server B that now expired) -> cache 120 sec
    - 12.5% will get server A (from the 25% of server B that now expired) -> cache 240 sec

    Fourth 120 seconds:
    - 25% will have server A cached -> cache for another 120 secs
    - 12.5% will get server A (from the 25% of B that now expired) -> cache 240 secs
    - 12.5% will get server B (from the 25% of B that now expired) -> cache 120 secs
    - 12.5% will get server A (from the 25% of A that now expired) -> cache 240 secs
    - 12.5% will get server B (from the 25% of A that now expired) -> cache 120 secs
    - 6.25% will get server A (from the 12.5% of B that now expired) -> cache 240 secs
    - 6.25% will get server B (from the 12.5% of B that now expired) -> cache 120 secs
    - 12.5% will have server A cached -> cache another 120 secs

    ... I think I lost something at this point, but I think you get the idea. As you can see, this gets pretty complicated to predict, and it will for sure not work out exactly like this in practice, but it should definitely have an effect on the distribution!

    I know that weighted round robin exists and is controlled at the authoritative server: it just cycles through DNS records when responding and returns records with a set probability that corresponds to the weighting. My DNS server does not support this, and my requirements are not that precise. If it doesn't weight perfectly, that's okay, but it should go in the right direction. I think using the TTL field could be a more elegant and easier solution -- and it doesn't require a DNS server that controls this dynamically, which saves resources -- which is, in my opinion, the whole point of DNS load balancing versus hardware load balancers.

    My question now is: are there any best practices / methods / rules of thumb for weighting round-robin distribution using the TTL attribute of DNS records?

    Edit: The system is a forward proxy server system. The amount of bandwidth (not requests) exceeds what one single server with ethernet can handle, so I need a balancing solution that distributes the bandwidth to several servers. Are there any alternative methods to using DNS? Of course I can use a load balancer with fibre channel etc., but the costs are ridiculous, and it also only increases the width of the bottleneck and does not eliminate it. The only thing I can think of are anycast (is it anycast or multicast?) IP addresses, but I don't have the means to set up such a system.
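
    A rough steady-state estimate of the scenario above (a back-of-the-envelope renewal argument, assuming every resolver re-queries as soon as its cached record expires and receives each record with equal probability p = 0.5): the long-run fraction of time a resolver holds server A is

        f_A = \frac{p_A T_A}{p_A T_A + p_B T_B}
            = \frac{0.5 \cdot 240}{0.5 \cdot 240 + 0.5 \cdot 120}
            = \frac{2}{3}

    so a 240 s / 120 s TTL pair drives roughly a 2:1 traffic split. Since f_i scales only linearly with T_i, reaching the desired 100:10:1 weighting through TTLs alone would require TTL ratios of 100:10:1 (e.g. 12000 / 1200 / 120 seconds) -- and such long TTLs would also make the pool very slow to react when a server drops out.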

  • is it right to call ejb bean from thread by ThreadPoolExecutor?

    - by kislo_metal
    I am trying to call an EJB bean method from a thread and getting an error (the application server is GlassFish v3):

        Log Level: SEVERE
        Logger: javax.enterprise.system.std.com.sun.enterprise.v3.services.impl
        Name-Value Pairs: {_ThreadName=Thread-1, _ThreadID=42}
        Record Number: 928
        Message ID: java.lang.NullPointerException
        Complete Message:
            at ua.co.rufous.server.broker.TempLicService.run(TempLicService.java:35)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            at java.lang.Thread.run(Thread.java:637)

    Here is the thread:

        public class TempLicService implements Runnable {
            String hash;

            // it's a Stateful bean
            @EJB
            private LicActivatorLocal lActivator;

            public TempLicService(String hash) {
                this.hash = hash;
            }

            @Override
            public void run() {
                lActivator.proccessActivation(hash);
            }
        }

    My ThreadPoolExecutor:

        public class RequestThreadPoolExecutor extends ThreadPoolExecutor {
            private boolean isPaused;
            private ReentrantLock pauseLock = new ReentrantLock();
            private Condition unpaused = pauseLock.newCondition();
            private static RequestThreadPoolExecutor threadPool;

            private RequestThreadPoolExecutor() {
                super(1, Integer.MAX_VALUE, 10, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
                System.out.println("RequestThreadPoolExecutor created");
            }

            public static RequestThreadPoolExecutor getInstance() {
                if (threadPool == null)
                    threadPool = new RequestThreadPoolExecutor();
                return threadPool;
            }

            public void runService(Runnable task) {
                threadPool.execute(task);
            }

            protected void beforeExecute(Thread t, Runnable r) {
                super.beforeExecute(t, r);
                pauseLock.lock();
                try {
                    while (isPaused)
                        unpaused.await();
                } catch (InterruptedException ie) {
                    t.interrupt();
                } finally {
                    pauseLock.unlock();
                }
            }

            public void pause() {
                pauseLock.lock();
                try {
                    isPaused = true;
                } finally {
                    pauseLock.unlock();
                }
            }

            public void resume() {
                pauseLock.lock();
                try {
                    isPaused = false;
                    unpaused.signalAll();
                } finally {
                    pauseLock.unlock();
                }
            }

            public void shutDown() {
                threadPool.shutdown();
            }

            // <<<<<< creating the thread here
            public void runByHash(String hash) {
                Runnable service = new TempLicService(hash);
                threadPool.runService(service);
            }
        }

    And the method where I call it (it is a GWT servlet, but there is no problem calling a thread that does not contain an EJB):

        @Override
        public Boolean submitHash(String hash) {
            System.out.println("submiting hash");
            try {
                if (tBoxService.getTempLicStatus(hash) == 1) {
                    // <<< here is the call
                    RequestThreadPoolExecutor.getInstance().runByHash(hash);
                    return true;
                }
            } catch (NoResultException e) {
                e.printStackTrace();
            }
            return false;
        }

    I need to organize some pool for submitting hashes to the server (calls of the LicActivator bean). Is the ThreadPoolExecutor design a good idea, and why is it not working in my case? (As I understand it, we can't create threads inside a bean, but can we call a bean from different threads?) If not, what is the best practice for organizing such a request pool? Thanks.

    Answer: I am using DI (EJB 3.1), so I do not need any lookup here (the application is packed in an EAR with both modules in it -- web module and EJB -- and it works perfectly for me), but I can use it only in managed classes. So:
    - Can I use a manual lookup in the thread?
    - Could I use a bean that extends ThreadPoolExecutor and calls another bean that implements Runnable, or is that not allowed?
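
    On the "manual lookup in the thread" question: @EJB injection is only performed on container-managed components, so in a hand-instantiated Runnable the lActivator field stays null, which matches the NullPointerException at TempLicService.java:35. A minimal sketch of looking the bean up via JNDI instead -- note the portable "java:global/<app>/<module>/<bean>" name shown is a guess and depends on the actual application, module and bean names:

        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        public class TempLicService implements Runnable {
            private final String hash;

            public TempLicService(String hash) {
                this.hash = hash;
            }

            @Override
            public void run() {
                try {
                    // Manual JNDI lookup instead of @EJB injection; injection
                    // never happens for objects the container did not create.
                    InitialContext ctx = new InitialContext();
                    LicActivatorLocal lActivator = (LicActivatorLocal)
                            ctx.lookup("java:global/myapp/myejbmodule/LicActivator");
                    lActivator.proccessActivation(hash);
                } catch (NamingException e) {
                    e.printStackTrace();
                }
            }
        }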

  • Help with this code.

    - by karthick6891
    Hey guys, I need help with this code. The short version: I have to do a match-the-following exercise using drag and drop in jQuery, and afterwards I need to show a message saying whether the answer is right or wrong.

        var output = "Wrong";
        var old_item;
        var new_item;
        var newObjArray = [];

        $(function() {
            var $gallery = $('.gallery'),
                $trash = $('.trash');

            $('div', $gallery).draggable({
                revert: 'invalid',
                containment: 'document',
                helper: 'clone',
                cursor: 'move',
            });

            // let the trash be droppable, accepting the gallery items
            $trash.droppable({
                //accept: '#gallery div',
                //activeClass: 'ui-state-highlight',
                tolerance: 'touch',
                drop: function(ev, ui) {
                    new_item = ui.draggable;
                    $(this).droppable("option", "activeClass", '.ui-state-highlight');
                    if (contains(newObjArray, ui.draggable)) {
                        newObjArray.pop(ui.draggable);
                    }
                    newObjArray.push(ui.draggable);
                    deleteImage(ui.draggable, $(this));
                }
            });

            // let the gallery be droppable as well, accepting items from the trash
            $gallery.droppable({
                //accept: '#trash li',
                activeClass: 'custom-state-active',
                drop: function(ev, ui) {
                    recycleImage(ui.draggable);
                }
            });

            // image deletion function
            function deleteImage($item, element) {
                var $list;
                $item.fadeOut(10, function() {
                    if ($(".ui-widget-content", element).length < 1) {
                        old_item = $item;
                        $list = $('<div class="gallery ui-helper-reset"/>').appendTo(element);
                        //$('ul',$trash).length ? $('ul',$trash) : $('<ul class="gallery ui-helper-reset"/>').appendTo($trash);
                        $item.appendTo($list).fadeIn(10);
                    } else {
                        recycleImage($(".ui-widget-content", element));
                        $list = $('<div class="gallery ui-helper-reset"/>').appendTo(element);
                        $item.appendTo($list).fadeIn(10);
                        old_item = $item;
                    }
                });
            }

            //check for given answer
            // image recycle function
            function recycleImage($item) {
                $item.fadeOut(10, function() {
                    $item.find("img").end().appendTo($gallery).fadeIn(10);
                });
            }

            // image preview function, demonstrating the ui.dialog used as a modal window
        });

        function checkOrder() {
            if (newObjArray == null) {
            } else if (newObjArray != null) {
                if (newObjArray[0].attr("id") == "5" && newObjArray[1].attr("id") == "1" &&
                    newObjArray[2].attr("id") == "3" && newObjArray[3].attr("id") == "2" &&
                    newObjArray[4].attr("id") == "4") {
                    parent.parent.increaseCorrectA();
                } else {
                    parent.parent.increaseWrongA();
                }
            }
        }

        function contains(a, obj) {
            var i = a.length;
            while (i--) {
                if (a[i] === obj) {
                    return true;
                }
            }
            return false;
        }

    I am calling the function checkOrder() from the HTML page after the matching of items is over. What happens here is that I have just stored the user responses (the draggable dropped over each droppable) in an array, based on position. It is bad practice, because I am deciding whether the answer is right or wrong purely from each item's position in the array. What I should do instead is check which draggable's id is present in each droppable, but I don't know how to get that inside the function checkOrder(). Any ideas, please?

  • Use QuickCheck by generating primes

    - by Dan
    Background: For fun, I'm trying to write a property for QuickCheck that can test the basic idea behind cryptography with RSA:

    - Choose two distinct primes, p and q. Let N = p*q.
    - e is some number relatively prime to (p-1)(q-1) (in practice, e is usually 3 for fast encoding).
    - d is the modular inverse of e modulo (p-1)(q-1).
    - For all x such that 1 < x < N, it is always true that (x^e)^d = x modulo N.

    In other words, x is the "message", raising it to the eth power mod N is the act of "encoding" the message, and raising the encoded message to the dth power mod N is the act of "decoding" it. (The property is also trivially true for x = 1, a case which is its own encryption.)

    Code: Here are the methods I have coded up so far:

        import Test.QuickCheck

        -- modular exponentiation
        modExp :: Integral a => a -> a -> a -> a
        modExp y z n = modExp' (y `mod` n) z `mod` n
            where modExp' y z | z == 0 = 1
                              | even z = modExp (y*y) (z `div` 2) n
                              | odd z  = (modExp (y*y) (z `div` 2) n) * y

        -- relatively prime
        rPrime :: Integral a => a -> a -> Bool
        rPrime a b = gcd a b == 1

        -- multiplicative inverse (modular)
        mInverse :: Integral a => a -> a -> a
        mInverse 1 _ = 1
        mInverse x y = (n * y + 1) `div` x
            where n = x - mInverse (y `mod` x) x

        -- just a quick way to test for primality
        n `divides` x = x `mod` n == 0
        primes = 2:filter isPrime [3..]
        isPrime x = null . filter (`divides` x) $ takeWhile (\y -> y*y <= x) primes

        -- the property
        prop_rsa (p,q,x) = isPrime p && isPrime q &&
                           p /= q && x > 1 && x < n &&
                           rPrime e t ==>
                           x == (x `powModN` e) `powModN` d
            where e = 3
                  n = p*q
                  t = (p-1)*(q-1)
                  d = mInverse e t
                  a `powModN` b = modExp a b n

    (Thanks, google and random blog, for the implementation of modular multiplicative inverse.)

    Question: The problem should be obvious: there are way too many conditions on the property to make it at all usable. Trying to invoke quickCheck prop_rsa in ghci made my terminal hang. So I've poked around the QuickCheck manual a bit, and it says properties may take the form

        forAll <generator> $ \<pattern> -> <property>

    How do I make a <generator> for prime numbers? Or for the other constraints, so that quickCheck doesn't have to sift through a bunch of failed conditions? Any other general advice (especially regarding QuickCheck) is welcome.
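
    One step the background glosses over (standard number theory, added here for completeness): d is chosen so that e*d = 1 + k*(p-1)*(q-1) for some integer k, and (p-1)(q-1) = \varphi(N), so for gcd(x, N) = 1 Euler's theorem gives

        (x^e)^d = x^{ed} = x^{1 + k\varphi(N)} = x \cdot \left(x^{\varphi(N)}\right)^k \equiv x \pmod{N}

    and the Chinese Remainder Theorem extends the identity to every 1 < x < N, including multiples of p or q.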

  • Log a user in to an ASP.net application using Windows Authentication without using Windows Authentication

    - by Rising Star
    I have an ASP.net application I'm developing authentication for. I am using an existing cookie-based log-on system to log users in to the system. The application runs as an anonymous account and then checks the cookie when the user wants to do something restricted. This is working fine. However, there is one caveat: I've been told that for each page that connects to our SQL server, I need to make it so that the user connects using an Active Directory account.

    Because the system I'm using is cookie-based, the user isn't logged in to Active Directory. Therefore, I use impersonation to connect to the server as a specific account. However, the powers that be here don't like impersonation; they say that it clutters up the code. I agree, but I've found no way around this. It seems that the only way a user can be logged in to an ASP.net application is by either connecting with Internet Explorer from a machine where the user is logged in with their Active Directory account, or by typing an Active Directory username and password. Neither of these is workable in my application.

    I think it would be nice if, when a user logs in and receives the cookie (which actually comes from a separate log-on application, by the way), some code could run which tells the application to perform all network operations as the user's Active Directory account, just as if they had typed an Active Directory username and password. It seems like this ought to be possible somehow, but the solution evades me. How can I make this work?

    Update: To those who have responded so far, I apologize for the confusion I have caused. The responses I've received indicate that you've misunderstood the question, so please allow me to clarify. I have no control over the requirement that users must perform network operations (such as SQL queries) using Active Directory accounts. I've been told several times (online and in meat-space) that this is an unusual requirement and possibly bad practice. I also have no control over the requirement that users must log in using the existing cookie-based log-on application.

    I understand that in an ideal MS ecosystem, I would simply disallow anonymous access in my IIS settings and users would log in using Windows Authentication. This is not the case. The current system is that, as far as IIS is concerned, the user logs in anonymously (even though they supply credentials which result in the issuance of a cookie), and we must programmatically check the cookie to see if the user has access to any restricted resources.

    In times past, we have simply used a single SQL account to perform all queries. My direct supervisor (who has many years of experience with this sort of thing) wants to change this. He says that if each user has his own AD account to perform SQL queries, it gives us more of a trail to follow if someone tries to do something wrong. The closest thing I've managed to come up with is using WIF to give the user a claim to a specific Active Directory account, but I still have to use impersonation, because even then the ASP.net process presents anonymous credentials to the SQL server.

    It boils down to this: can I log users in with Active Directory accounts in my ASP.net application without having the users manually enter their AD credentials (Windows Authentication)?

  • ublas::bounded_vector<> being resized?

    - by n2liquid
    Now, seriously... I'll refrain from using bad words here because we're talking about the Boost fellows. It MUST be my mistake to see things this way, but I can't understand why, so I'll ask it here; maybe someone can enlighten me in this matter. Here it goes:

    uBLAS has this nice class template called bounded_vector<> that's used to create fixed-size vectors (or so I thought). From the Effective uBLAS wiki (http://www.crystalclearsoftware.com/cgi-bin/boost_wiki/wiki.pl?Effective_UBLAS):

        The default uBLAS vector and matrix types are of variable size. Many linear algebra problems involve vectors with fixed size. 2 and 3 elements are common in geometry! Fixed size storage (akin to C arrays) can be implemented efficiently as it does not involve the overheads (heap management) associated with dynamic storage. uBLAS implements fixed sizes by changing the underlying storage of a vector/matrix to a "bounded_array" from the default "unbounded_array".

    Alright, this bounded_vector<> thing is used to free you from specifying the underlying storage of the vector as a bounded_array<> of the specified size. Here I ask you: doesn't it look like this bounded vector thing has fixed size to you? Well, it doesn't. At first I felt betrayed by the wiki, but then I reconsidered the meaning of "bounded" and I think I can let it pass. But in case you, like me (I'm still uncertain), are still wondering if this makes sense: what I found out is that the bounded_vector<> actually can be resized; it only may not be greater than the size specified as the template parameter.

    So, first off, do you think they had a good reason not to make a real fixed-size vector or matrix type? Do you think it's okay to "sell" this bounded -- as opposed to fixed-size -- vector to the users of my library as a "fixed-size" vector replacement, even named "Vector3" or "Vector2", like the Effective uBLAS wiki did? Do you think I should somehow implement a vector with fixed size for this purpose? If so, how? (Sorry, but I'm really new to uBLAS; I just tried it today.) I am developing a 3D game. Should uBLAS be used for the calculations involved in this ("hey, geometry!", per the Effective uBLAS wiki)? What replacement would you suggest, if not?

    -- edit

    And just in case, yes, I've read this warning:

        It should be noted that this only changes the storage uBLAS uses for the vector3. uBLAS will still use all the same algorithms (which assume a variable size) to manipulate the vector3. In practice this seems to have no negative impact on speed. The above runs just as quickly as a hand-crafted vector3 which does not use uBLAS. The only negative impact is that the vector3 always stores a "size" member which in this case is redundant [or is it? I mean......].

    I see it uses the same algorithms, assuming a variable size, but if an operation were to actually change its size, shouldn't it be stopped (assertion)?

        ublas::bounded_vector<float,3> v3;
        ublas::bounded_vector<float,2> v2;
        v3 = v2;
        std::cout << v3.size() << '\n'; // prints 2

    Oh, come on, isn't this just plain betrayal?

  • Where do I start ?

    - by Panthe
    Brief history: just graduated high school, learned a bit of Python and C++, have no friends with any helpful computer knowledge at all. Out of anyone I met in my school years I was probably the biggest nerd, but no one really knew. I consider myself to have a vaster knowledge of computers and tech than the average person: I've built and fixed tons of computers, and I have the ability to troubleshoot pretty much any problem I come across.

    Now that high school is over, I've really been thinking about my career. Having loved and lived computers for the past 15 years of my life, I decided to take my abilities and try to learn computer programming. Why I didn't start earlier I don't know; it seems to have been a big mistake on my part... Doing some research, I concluded that Python was the first programming language I should learn, since it was high-level and easier to understand than C++ and Java. I also knew that to become good at what I did I needed to know more than just two or three languages, which didn't seem like a big problem, considering that once I learned the way Python worked, mainly the syntax would change, and the rest would come naturally.

    I watched a couple of YouTube videos, downloaded some book PDFs and snooped around some tutorials here and there to get the hang of what to do. Two solid weeks have passed of trying to understand the syntax, creating small programs that use the basic functions, and understanding how it all works; I think I have got the hang of it. It breaks down into what I've been dealing with all this time (although I kind of knew): input, output, loops, functions and other things derived from 0's and 1's storing data and recalling it, etc. (A VERY BASIC IDEA.)

    I've been able to create small programs -- Hangman, file storing, temperature conversion, a Caesar cipher encoder/decoder, the Fibonacci sequence and more -- and I understand how each works. Being two weeks into this, I have learned a lot; nothing at all compared to what I should be learning in the years to come if I get a grip on what I'm doing. While doing these programs I won't stop until I've finished a practice problem in a book, which, embarrassingly enough, can take me a couple of hours depending on its complexity. I absolutely will not put aside the challenge until it's complete. I've tried most problems without cheating and reached success, which makes me feel extremely proud of myself after completing something after much trial and error.

    After all this, I have met the demon: algorithms, which seem to be the key to efficient code. I can't seem to wrap my head around some of the code people put out there using numbers, and sometimes even basic functions. I have been able to understand it after a while, but I know there are a lot more complex things to come. Considering myself smart, functions that require complex code actually hurt my brain. NOTHING EVER IN LIFE HURT MY BRAIN....... not even math classes in high school. Trying to understand some of the stuff people put out there makes me feel like I have a mental disadvantage lol... I still walk forward though, crossing my fingers that the understanding will come with time. Sorry if this is long; I just wish someone would take all these things into consideration when answering my question.

    Even through all these downsides I'm still pushing through and continuing to try to get good at this. I know reading these tutorials won't make me any good unless I can become creative, make my own programs and understand other people's, so this leads me to the simple question I could have asked in the beginning..... WHERE IN THE WORLD DO I START? I've been trying to find out how to understand some of the open source projects, and how I can work with experienced coders to learn from them and help them, but I don't think that's even possible given how far people's knowledge is compared to mine, and I have no friends I can learn from. Can someone help me and guide me in the right direction? I have a huge motivation to get good at coding; any information would be extremely helpful.

  • HttpClient multithread performance

    - by pepper
    I have an application which downloads more than 4500 HTML pages from 62 target hosts using HttpClient (4.1.3 or 4.2-beta). It runs on Windows 7 64-bit. Processor: Core i7 2600K. Network bandwidth: 54 Mb/s.

    At this moment it uses these parameters:

    - DefaultHttpClient and PoolingClientConnectionManager;
    - an IdleConnectionMonitorThread from http://hc.apache.org/httpcomponents-client-ga/tutorial/html/connmgmt.html;
    - maximum total connections = 80;
    - default maximum connections per route = 5;
    - for thread management it uses ForkJoinPool with parallelism level = 5 (do I understand correctly that this is the number of working threads?).

    In this case my network usage (in Windows Task Manager) does not rise above 2.5%, and downloading the 4500 pages takes 70 minutes. In the HttpClient logs I have things like this:

        DEBUG ForkJoinPool-2-worker-1 [org.apache.http.impl.conn.PoolingClientConnectionManager]: Connection released: [id: 209][route: {}-http://stackoverflow.com][total kept alive: 6; route allocated: 1 of 5; total allocated: 10 of 80]

    Total allocated connections do not rise above 10-12, in spite of the fact that I've set them up to 80 connections. If I try to raise the parallelism level to 20 or 80, network usage remains the same but a lot of connection timeouts are generated. I've read the tutorials on hc.apache.org (HttpClient Performance Optimization Guide and HttpClient Threading Guide) but they do not help.

    The task's code looks like this:

        public class ContentDownloader extends RecursiveAction {
            private final HttpClient httpClient;
            private final HttpContext context;
            private List<Entry> entries;

            public ContentDownloader(HttpClient httpClient, List<Entry> entries) {
                this.httpClient = httpClient;
                context = new BasicHttpContext();
                this.entries = entries;
            }

            private void computeDirectly(Entry entry) {
                final HttpGet get = new HttpGet(entry.getLink());
                try {
                    HttpResponse response = httpClient.execute(get, context);
                    int statusCode = response.getStatusLine().getStatusCode();
                    if ((statusCode >= 400) && (statusCode <= 600)) {
                        logger.error("Couldn't get content from " + get.getURI().toString()
                                + "\n" + response.toString());
                    } else {
                        HttpEntity entity = response.getEntity();
                        if (entity != null) {
                            String htmlContent = EntityUtils.toString(entity).trim();
                            entry.setHtml(htmlContent);
                            EntityUtils.consumeQuietly(entity);
                        }
                    }
                } catch (Exception e) {
                } finally {
                    get.releaseConnection();
                }
            }

            @Override
            protected void compute() {
                if (entries.size() <= 1) {
                    computeDirectly(entries.get(0));
                    return;
                }
                int split = entries.size() / 2;
                invokeAll(new ContentDownloader(httpClient, entries.subList(0, split)),
                          new ContentDownloader(httpClient, entries.subList(split, entries.size())));
            }
        }

    And the question is: what is the best practice for using a multi-threaded HttpClient? Maybe there are some rules for setting up the ConnectionManager and HttpClient? How can I use all of the 80 connections and raise network usage? If necessary, I will provide more code.
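
    One observation that may explain the "10-12 of 80" numbers: with a ForkJoinPool parallelism of 5, at most about five requests are ever in flight, regardless of the pool limits, which matches the log line above (the extra kept-alive connections just accumulate). A sketch of a configuration that lets more requests run concurrently -- the numbers and class names here are illustrative, assuming the HttpClient 4.2 PoolingClientConnectionManager API:

        import org.apache.http.client.HttpClient;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.impl.conn.PoolingClientConnectionManager;

        public class DownloaderConfig {
            public static HttpClient createTunedClient() {
                PoolingClientConnectionManager cm = new PoolingClientConnectionManager();
                cm.setMaxTotal(80);            // total connections across all hosts
                cm.setDefaultMaxPerRoute(5);   // per-host cap, polite to each of the 62 hosts
                return new DefaultHttpClient(cm);
            }
        }

        // The single shared client is then driven by many more worker threads,
        // e.g. roughly maxTotal / defaultMaxPerRoute * (a small factor):
        //     ForkJoinPool pool = new ForkJoinPool(60);
        //     pool.invoke(new ContentDownloader(DownloaderConfig.createTunedClient(), entries));

    Whether 60 workers actually help depends on where the timeouts at parallelism 20-80 came from (per-route queueing versus server throttling), so treat the figures as a starting point, not a recommendation.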

  • What does the `forall` keyword in Haskell/GHC do?

    - by JUST MY correct OPINION
    I've been banging my head on this one for (quite literally) years now. I'm beginning to kinda/sorta understand how the forall keyword is used in so-called "existential types" like this:

        data ShowBox = forall s. Show s => SB s

    (This despite the confusingly-worded explanations of it in the fragments found all around the web.) This is only a subset, however, of how forall is used, and I simply cannot wrap my mind around its use in things like this:

        runST :: forall a. (forall s. ST s a) -> a

    Or explaining why these are different:

        foo :: (forall a. a -> a) -> (Char, Bool)
        bar :: forall a. ((a -> a) -> (Char, Bool))

    Or the whole RankNTypes stuff that breaks my brain when "explained" in a way that makes me want to do that Samuel L. Jackson thing from Pulp Fiction. (Don't follow that link if you're easily offended by strong language.)

    The problem, really, is that I'm a dullard. I can't fathom the chicken scratchings (some call them "formulae") of the elite mathematicians that created this language, seeing as my university years are over two decades behind me and I never actually had to put what I learnt to use in practice. I also tend to prefer clear, jargon-free English rather than the kinds of language which are normal in academic environments. Most of the explanations I attempt to read on this (the ones I can find through search engines) have these problems:

    - They're incomplete. They explain one part of the use of this keyword (like "existential types"), which makes me feel happy until I read code that uses it in a completely different way (like runST, foo and bar above).
    - They're densely packed with assumptions that I've read the latest in whatever branch of discrete math, category theory or abstract algebra is popular this week. (If I never read the words "consult the paper whatever for details of implementation" again, it will be too soon.)
    - They're written in ways that frequently turn even simple concepts into tortuously twisted and fractured grammar and semantics.

    (I suspect that the last two items are the biggest problem. I wouldn't know, though, since I'm too much of a dullard to comprehend them.)

    It's been asked why Haskell never really caught on in industry. I suspect, in my own humble, unintelligent way, that my experience in figuring out one stupid little keyword -- a keyword that is increasingly ubiquitous in the libraries being written these days -- is also part of the answer to that question. It's hard for a language to catch on when even its individual keywords cause years-long quests to comprehend. Years-long quests which end in failure.

    So... on to the actual question. Can anybody completely explain the forall keyword in clear, plain English (or, if it exists somewhere, point to such a clear explanation which I've missed) that doesn't assume I'm a mathematician steeped in the jargon?

  • What is the best way to solve an Objective-C namespace collision?

    - by Mecki
    Objective-C has no namespaces; much like C, everything lives within one global namespace. Common practice is to prefix classes with initials, e.g. if you are working at IBM, you could prefix them with "IBM"; if you work for Microsoft, you could use "MS"; and so on. Sometimes the initials refer to the project, e.g. Adium prefixes classes with "AI" (as there is no company behind it whose initials you could take). Apple prefixes classes with NS and says this prefix is reserved for Apple only.

    So far so good. But prepending 2 to 4 letters to a class name is a very, very limited namespace. E.g. MS or AI could have entirely different meanings (AI could be Artificial Intelligence, for example) and some other developer might decide to use them and create an equally named class. Bang, namespace collision.

    Okay, if this is a collision between one of your own classes and one of an external framework you are using, you can easily change the naming of your class, no big deal. But what if you use two external frameworks, both frameworks that you don't have the source to and that you can't change? Your application links with both of them and you get name conflicts. How would you go about solving these? What is the best way to work around them in such a way that you can still use both classes?

    In C you can work around these by not linking directly to the library; instead you load the library at runtime using dlopen(), then find the symbol you are looking for using dlsym(), assign it to a global symbol (that you can name any way you like), and then access it through this global symbol. E.g. if you have a conflict because some C library has a function named open(), you could define a variable named myOpen and have it point to the open() function of the library; thus when you want to use the system open(), you just use open(), and when you want to use the other one, you access it via the myOpen identifier.

    Is something similar possible in Objective-C, and if not, is there any other clever, tricky solution you can use to resolve namespace conflicts? Any ideas?

    Update: Just to clarify this: answers that suggest how to avoid namespace collisions in advance, or how to create a better namespace, are certainly welcome; however, I will not accept them as the answer, since they don't solve my problem. I have two libraries and their class names collide. I can't change them; I don't have the source of either one. The collision is already there, and tips on how it could have been avoided in advance won't help anymore. I can forward them to the developers of these frameworks and hope they choose a better namespace in the future, but for the time being I'm searching for a solution that lets me work with both frameworks right now within a single application. Any solutions to make this possible?

    Read the article

  • Array not showing in Jlist but filled in console

    - by OVERTONE
    Hey there. I've been a busy debugger today, so I'll give the short version. I've made an ArrayList that takes names from a database, then I put the contents of the ArrayList into an array of strings. Now I want to display the array's contents in a JList. The weird thing is that it was working earlier. I have two methods; one is just a little practice run to make sure I was adding to the JList correctly. So here's the key code. The layout of my class is: variables, constructor, methods. In my variables I have these 3 defined: String[] contactListNames = new String[5]; ArrayList<String> rowNames = new ArrayList<String>(); JList contactList = new JList(contactListNames); Simple enough. In my constructor I have them again: contactListNames = new String[5]; contactList = new JList(contactListNames); // I don't have the array list defined here though. printSqlDetails(); // printSqlDetails was to make sure that the connection was alright, and it's working fine. fillContactList(); // this is the one that's causing me grief. // fillContactListTest(); // this was the tester that makes sure I'm adding to the list alright. Here's the code for fillContactListTest(): public void fillContactListTest() { for(int i = 0;i<3;i++) { try { String contact; System.out.println(" please fill the list at index "+ i); Scanner in = new Scanner(System.in); contact = in.next(); contactListNames[i] = contact; in.nextLine(); } catch(Exception e) { e.printStackTrace(); } } } Here's the main one that's supposed to work: public void fillContactList() { int i =0; createConnection(); ArrayList<String> rowNames = new ArrayList<String>(); try { Statement stmt = conn.createStatement(); ResultSet namesList = stmt.executeQuery("SELECT name FROM Users"); try { while (namesList.next()) { rowNames.add(namesList.getString(1)); contactListNames =(String[])rowNames.toArray(new String[rowNames.size()]); // this used to print out contents of array list // System.out.println("" + rowNames); while(i<contactListNames.length) { System.out.println(" " + contactListNames[i]); i++; } } } catch(SQLException q) { q.printStackTrace(); } conn.commit(); stmt.close(); conn.close(); } catch(SQLException e) { e.printStackTrace(); } } I really need help here; I'm at my wits' end. I just can't see why the first method adds to the JList no problem but the second one won't. Both the contactListNames array and the ArrayList print fine and have the names in them, so I must be transferring them to the JList wrong. Please help. P.S. I'm aware this is long, but trust me, it's the short version.
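    A hedged guess at the cause, with a sketch of the conventional fix (the diagnosis is an assumption based on the code shown, not a confirmed answer): a JList constructed from an array wraps that exact array reference, so fillContactListTest, which writes into the slots of the original array, is visible to the list, while fillContactList, which reassigns contactListNames to the brand-new array returned by toArray, never touches the array the JList is reading. Pushing values through a ListModel avoids the problem entirely:

        import java.util.ArrayList;
        import java.util.List;
        import javax.swing.DefaultListModel;
        import javax.swing.JList;

        public class ContactListSketch {
            public static void main(String[] args) {
                // back the JList with a model instead of a raw array
                DefaultListModel<String> model = new DefaultListModel<>();
                JList<String> contactList = new JList<>(model);

                // stand-in for the rows the database query returns
                List<String> rowNames = new ArrayList<>();
                rowNames.add("alice");
                rowNames.add("bob");

                // repopulate the model; the JList repaints itself
                model.clear();
                for (String name : rowNames) {
                    model.addElement(name);
                }
            }
        }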

    Read the article

  • Where should I create my DbCommand instances?

    - by Domenic
    I seemingly have two choices: Make my class implement IDisposable. Create my DbCommand instances as private readonly fields, and in the constructor, add the parameters that they use. Whenever I want to write to the database, bind to these parameters (reusing the same command instances), set the Connection and Transaction properties, then call ExecuteNonQuery. In the Dispose method, call Dispose on each of these fields. Or: each time I want to write to the database, write using(var cmd = new DbCommand("...", connection, transaction)) around the usage of the command, and add parameters and bind to them every time as well, before calling ExecuteNonQuery. I assume I don't need a new command for each query, just a new command for each time I open the database (right?). Both of these seem somewhat inelegant and possibly incorrect. For #1, it is annoying for my users that this class is now IDisposable just because I have used a few DbCommands (which should be an implementation detail that they don't care about). I'm also somewhat suspicious that keeping a DbCommand instance around might inadvertently lock the database or something? For #2, it feels like I'm doing a lot of work (in terms of .NET objects) each time I want to write to the database, especially with the parameter-adding. It seems like I create the same object every time, which just feels like bad practice. For reference, here is my current code, using #1: using System; using System.Net; using System.Data.Common; using System.Data.SQLite; public class Class1 : IDisposable { private readonly SQLiteCommand updateCookie = new SQLiteCommand("UPDATE moz_cookies SET value = @value, expiry = @expiry, isSecure = @isSecure, isHttpOnly = @isHttpOnly WHERE name = @name AND host = @host AND path = @path"); public Class1() { this.updateCookie.Parameters.AddRange(new[] { new SQLiteParameter("@name"), new SQLiteParameter("@value"), new SQLiteParameter("@host"), new SQLiteParameter("@path"), new SQLiteParameter("@expiry"), new SQLiteParameter("@isSecure"), new SQLiteParameter("@isHttpOnly") }); } private static void BindDbCommandToMozillaCookie(DbCommand command, Cookie cookie) { long expiresSeconds = (long)(cookie.Expires - new DateTime(1970, 1, 1)).TotalSeconds; command.Parameters["@name"].Value = cookie.Name; command.Parameters["@value"].Value = cookie.Value; command.Parameters["@host"].Value = cookie.Domain; command.Parameters["@path"].Value = cookie.Path; command.Parameters["@expiry"].Value = expiresSeconds; command.Parameters["@isSecure"].Value = cookie.Secure; command.Parameters["@isHttpOnly"].Value = cookie.HttpOnly; } public void WriteCurrentCookiesToMozillaBasedBrowserSqlite(string databaseFilename) { using (SQLiteConnection connection = new SQLiteConnection("Data Source=" + databaseFilename)) { connection.Open(); using (SQLiteTransaction transaction = connection.BeginTransaction()) { this.updateCookie.Connection = connection; this.updateCookie.Transaction = transaction; foreach (Cookie cookie in SomeOtherClass.GetCookieArray()) { Class1.BindDbCommandToMozillaCookie(this.updateCookie, cookie); this.updateCookie.ExecuteNonQuery(); } transaction.Commit(); } } } #region IDisposable implementation protected virtual void Dispose(bool disposing) { if (!this.disposed && disposing) { this.updateCookie.Dispose(); } this.disposed = true; } public void Dispose() { this.Dispose(true); GC.SuppressFinalize(this); } ~Class1() { this.Dispose(false); } private bool disposed; #endregion }
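    For comparison, a minimal sketch of option #2 (the helper shape is an assumption, not a drop-in replacement): the command and its parameters are created per unit of work inside a using block, so nothing on the containing class needs to be IDisposable, and constructing ADO.NET command objects is cheap next to the cost of the query itself.

        using System.Data.SQLite;

        public static class CookieCommands
        {
            // build a fresh, fully parameterized command per unit of work
            public static SQLiteCommand CreateUpdateCookie(
                SQLiteConnection connection, SQLiteTransaction transaction)
            {
                var cmd = new SQLiteCommand(
                    "UPDATE moz_cookies SET value = @value, expiry = @expiry " +
                    "WHERE name = @name AND host = @host AND path = @path",
                    connection, transaction);
                cmd.Parameters.AddRange(new[]
                {
                    new SQLiteParameter("@value"),
                    new SQLiteParameter("@expiry"),
                    new SQLiteParameter("@name"),
                    new SQLiteParameter("@host"),
                    new SQLiteParameter("@path")
                });
                return cmd;
            }
        }

        // usage, inside the existing connection/transaction using blocks:
        // using (var cmd = CookieCommands.CreateUpdateCookie(connection, transaction))
        // {
        //     // bind values and call cmd.ExecuteNonQuery() per cookie
        // }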

    Read the article

  • PostgreSQL triggers and passing parameters

    - by iandouglas
    This is a multi-part question. I have a table similar to this: CREATE TABLE sales_data ( Company character(50), Contract character(50), top_revenue_sum integer, top_revenue_sales integer, last_sale timestamp) ; I'd like to create a trigger for new inserts into this table, something like this: CREATE OR REPLACE FUNCTION add_contract() RETURNS trigger AS $add_contract$ DECLARE myCompany character(50); myContract character(50); BEGIN myCompany = TG_ARGV[0]; myContract = TG_ARGV[1]; IF (TG_OP = 'INSERT') THEN EXECUTE 'CREATE TABLE salesdata_' || myCompany || '_' || myContract || ' ( sale_amount integer, updated TIMESTAMP not null, some_data varchar(32), country varchar(2) ) ;' ; EXECUTE 'CREATE TRIGGER update_sales_data BEFORE INSERT OR DELETE ON salesdata_' || myCompany || '_' || myContract || ' FOR EACH ROW EXECUTE PROCEDURE update_sales_data( ' || myCompany || ',' || myContract || ', revenue);' ; END IF; RETURN NEW; END; $add_contract$ LANGUAGE plpgsql; CREATE TRIGGER add_contract AFTER INSERT ON sales_data FOR EACH ROW EXECUTE PROCEDURE add_contract() ; Basically, every time I insert a new row into sales_data, I want to generate a new table where the name of the table will be defined as something like "salesdata_Company_Contract". So my first question is how can I pass the Company and Contract data to the trigger so it can be passed to the add_contract() stored procedure? From my stored procedure, you'll see that I also want to update the original sales_data table whenever new data is inserted into the salesdata_Company_Contract table. This trigger will do something like this: CREATE OR REPLACE FUNCTION update_sales_data() RETURNS trigger AS $update_sales_data$ DECLARE myCompany character(50); myContract character(50); myRevenue integer; BEGIN myCompany = TG_ARGV[0] ; myContract = TG_ARGV[1] ; myRevenue = TG_ARGV[2] ; IF (TG_OP = 'INSERT') THEN UPDATE sales_data SET top_revenue_sales = top_revenue_sales + 1, top_revenue_sum = top_revenue_sum + myRevenue, last_sale = now() WHERE Company = myCompany AND Contract = myContract ; ELSIF (TG_OP = 'DELETE') THEN UPDATE sales_data SET top_revenue_sales = top_revenue_sales - 1, top_revenue_sum = top_revenue_sum - myRevenue, last_sale = now() WHERE Company = myCompany AND Contract = myContract ; END IF; IF (TG_OP = 'DELETE') THEN RETURN OLD; ELSE RETURN NEW; END IF; END; $update_sales_data$ LANGUAGE plpgsql; This will, of course, require that I pass several parameters around within these stored procedures and triggers, and I'm not sure (a) if this is even possible, or (b) practical, or (c) best practice, or whether we should just put this logic into our other software instead of asking the database to do this work for us. To keep our table sizes down, as we'll have hundreds of thousands of transactions per day, we've decided to partition our data using the Company and Contract strings as part of the table names themselves so they're all very small in size; file IO for us is faster and we felt we'd get better performance. Thanks for any thoughts or direction. My thinking, now that I've written all of this out, is that maybe we need to write stored procedures where we pass our insert data as parameters, and call that from our other software, and have the stored procedure do the insert into "sales_data" then create the other table. Then, have a second stored procedure to insert new data into the salesdata_Company_Contract tables, where the table name is passed to the stored proc as a parameter, and again have that stored proc do the insert, then update the main sales_data table afterward. What approach would you take?
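    On the first question, a hedged sketch (assuming the sales_data columns above; untested against the real schema): a row-level trigger function reads the inserted row's values directly from the NEW record, so nothing has to be passed as an argument for data that already lives in the row. TG_ARGV only carries the literal arguments written into the CREATE TRIGGER statement, never row data.

        CREATE OR REPLACE FUNCTION add_contract() RETURNS trigger AS $add_contract$
        BEGIN
            -- NEW carries the freshly inserted sales_data row; trim() drops the
            -- blank padding of character(50), quote_ident() guards the name
            EXECUTE 'CREATE TABLE '
                || quote_ident('salesdata_' || trim(NEW.Company) || '_' || trim(NEW.Contract))
                || ' (sale_amount integer, updated timestamp NOT NULL,'
                || ' some_data varchar(32), country varchar(2))';
            RETURN NEW;
        END;
        $add_contract$ LANGUAGE plpgsql;

        CREATE TRIGGER add_contract AFTER INSERT ON sales_data
            FOR EACH ROW EXECUTE PROCEDURE add_contract();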

    Read the article

  • How to design a data model that deals with (real) contracts?

    - by Geoffrey
    I was looking for some advice on designing a data model for contract administration. The general life cycle of a contract is thus: Contract is created and in a "draft" state. It is viewable internally and changes may be made. Contract goes out to vendor, status is set to "pending". Contract is rejected by vendor. In this state, nothing can be done to the contract. No statuses may be added to the collection. Contract is accepted by vendor. In this state, nothing can be done to the contract. No statuses may be added to the collection. I obviously want to avoid a situation where the contract is accepted and, say, the amount is changed. Here are my classes: [EnforceNoChangesAfterDraftState] public class VendorContract { public virtual Vendor Vendor { get; set; } public virtual decimal Amount { get; set; } public virtual VendorContact VendorContact { get; set; } public virtual string CreatedBy { get; set; } public virtual DateTime CreatedOn { get; set; } public virtual FileStore Contract { get; set; } public virtual IList<VendorContractStatus> ContractStatus { get; set; } } [EnforceCorrectWorkflow] public class VendorContractStatus { public virtual VendorContract VendorContract { get; set; } public virtual FileStore ExecutedDocument { get; set; } public virtual string Status { get; set; } public virtual string Reason { get; set; } public virtual string CreatedBy { get; set; } public virtual DateTime CreatedOn { get; set; } } I've omitted the FileStore class, which is basically a key/value lookup to find the document based on its guid. The VendorContractStatus is mapped as a many-to-one in NHibernate. I then use a custom validator as described here. If anything but draft is returned in the VendorContractStatus collection, no changes are allowed. Furthermore, the VendorContractStatus must follow the correct workflow (you can add a rejected after a pending, but you can't add anything else to the collection if a rejected or accepted exists, etc.). Does that all sound alright? Well, a colleague has argued that we should simply add an "IsDraft" bool property to VendorContract and not accept updates if IsDraft is false. Then we would set up a method inside VendorContractStatus for updating the status; if something gets added after a draft, it sets the IsDraft property of VendorContract to false. I do not like this, as it feels like I'm dirtying up the POCOs and adding logic that should live in the validation layer; no such rules should exist in these classes, and they shouldn't be aware of their own states. Any thoughts on this, and what is the better practice from a DDD perspective? From my view, if in the future we want more complex rules, my way will be more maintainable over the long run. Say contracts over a certain amount must be approved by a manager. I would think it would be better to have a one-to-one mapping with a VendorContractApproval class, rather than adding IsApproved properties, but that's just speculation. This might be splitting hairs, but this is the first real gritty enterprise software project we've done. Any advice would be appreciated!
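    To make the workflow rule concrete, a small sketch of it as a standalone check, separate from the NHibernate validator plumbing (the class and status names here are hypothetical): once a terminal status exists in the collection, nothing further may be added.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class ContractWorkflow
        {
            private static readonly HashSet<string> Terminal =
                new HashSet<string>(StringComparer.OrdinalIgnoreCase) { "Rejected", "Accepted" };

            // true while the status history contains no terminal entry yet
            public static bool CanAddStatus(IEnumerable<string> history)
            {
                return !history.Any(s => Terminal.Contains(s));
            }
        }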

    Read the article

  • Please recommend me intermediate-to-advanced Python books to buy.

    - by anonnoir
    I'm in the final year, final semester of my law degree, and will be graduating very soon. (April, to be specific.) But before I begin practice, I plan to take two months off, purely for serious programming study. So I'm currently looking for some Python-related books, gauged intermediate to advanced, which are interesting (because of the subject matter itself) and possibly useful to my future line of work. I've identified 2 possible purchases at the moment: Natural Language Processing with Python. The law deals mostly with words, and I've quite a number of ideas as to where I might go with NLP. Data extraction, summaries, client management systems linked with document templates, etc. Programming Collective Intelligence. This book fascinates me, because I've always liked the idea of machine learning (and I'm currently studying it on the side too, for fun). I'd like to build/play around with Web 2.0 applications; and who knows if I can apply some of the things I learn to my legal work. (E.g. Playground experiments to determine how and under what circumstances judges might be biased, by forcing algorithms to pore through judgments and calculate similarities, etc.) Please feel free to criticize my current choices, but do at least offer or recommend other books that I should read in their place. My budget can deal with 4 books, max. These books will be used heavily throughout the two months; I will be reading them back to back, absorbing the explanations given, and hacking away at their code. Also, the books themselves should satisfy 2 main criteria: Application. The book must teach how to solve problems. I like reading theory, but I want to build things and solve problems first. Even playful applications are fine, because games and experiments always have real-world applications sooner or later. Readability. I like reading technical books, no matter how difficult they are. I enjoy the effort and the feeling that you're learning something. But the book shouldn't contain code or explanations that are too cryptic or erratic. Even if it's difficult, the book's content should be accessible with focused reading. Note: I realize that I am somewhat of a beginner to the whole programming thing, so please don't put me down. But from experience, I think it's better to aim up and leave my comfort zone when learning new things, rather than to just remain stagnant the way I am. (At least the difficulty gives me focus: i.e. if a programmer can be that good, perhaps if I sustain my own efforts I too can be as good as him someday.) If anything, I'm also a very determined person, so two months of day-to-night intensive programming study with nothing else on my mind should, I think, give me a bit of a fighting chance to push my programming skills to a much higher level.

    Read the article

  • How to add addtional disks to a Windows 2008 KVM based Guest?

    - by taazaa
    I have a Win 2008 KVM based guest VM running on an Ubuntu 10 host. It is a raw image of 22G. I want to add a "data" drive which would show up as the "D:\" drive on the guest. I first created a raw image using: qemu-img create -f raw ~/vmdisk2.img 50G Then I tried attaching it using virsh attach-disk. When that did not work, I tried editing the XML file of the VM directly. Neither seemed to work. I would greatly appreciate any help on how to do this and what the best practice is. I want to keep the base image small, so that I can clone it (hopefully) and then attach the necessary storage based on the application at hand. Update: The XML of the VM before adding the second drive: <domain type='kvm'> <name>win08e-vm1</name> <uuid>183a4ba0-1c0b-0b04-ad01-aa7c3a4cb390</uuid> <memory>1048576</memory> <currentMemory>1048576</currentMemory> <vcpu>2</vcpu> <os> <type arch='x86_64' machine='pc-0.12'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <pae/> </features> <clock offset='localtime'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/bin/kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw'/> <source file='/var/lib/libvirt/images/win08e-vm1.img'/> <target dev='hda' bus='ide'/> <address type='drive' controller='0' bus='0' unit='0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/home/taazaa/iso/Win08ER264.iso'/> <target dev='hdc' bus='ide'/> <readonly/> <address type='drive' controller='0' bus='1' unit='0'/> </disk> <controller type='ide' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <interface type='bridge'> <mac address='52:54:00:7f:a7:ae'/> <source bridge='br0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> <serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <input type='tablet' bus='usb'/> <input type='mouse' bus='ps2'/> <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/> <video> <model type='vga' vram='9216' heads='1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </memballoon> </devices> </domain> Thanks!
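    For what it's worth, a hedged sketch of the two usual routes (the image path and the hdb target are assumptions; hdb should be free given the existing hda disk and hdc cdrom). Two common snags: IDE disks generally cannot be hot-plugged into a running guest, and hand-editing the file under /etc/libvirt/qemu/ gets overwritten by libvirtd, so persistent changes should go through virsh edit or virsh define.

        # with the guest shut down, add the disk to the persistent definition:
        virsh shutdown win08e-vm1
        virsh edit win08e-vm1      # then insert, next to the existing <disk> elements:
        #   <disk type='file' device='disk'>
        #     <driver name='qemu' type='raw'/>
        #     <source file='/home/taazaa/vmdisk2.img'/>
        #     <target dev='hdb' bus='ide'/>
        #   </disk>
        virsh start win08e-vm1

        # alternatively, on virsh builds that support it:
        virsh attach-disk win08e-vm1 /home/taazaa/vmdisk2.img hdb --driver qemu --subdriver raw

    Inside Windows, the new disk still has to be brought online, initialized, and formatted in Disk Management before it appears as D:.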

    Read the article

  • No data when attempting to get JSONP data from cross domain PHP script

    - by Alex
    I am trying to pull latitude and longitude values from another server on a different domain using a single id string. I am not very familiar with jQuery, but it seems to be the easiest way to go about getting around the same origin problem. I am unable to use iframes and I cannot install PHP on the server running this javascript, which is forcing my hand on this. My queries appear to be going through properly, but I am not getting any results back. I was hoping someone here might have an idea that could help, seeing as I probably wouldn't recognize the most obvious errors here. My javascript function is: var surl = "http://...omitted.../pull.php"; var idnum = 5a; //in practice this is defined above alert("BEFORE"); $.ajax({ url: surl, data: {id: idnum}, dataType: "jsonp", jsonp : "callback", jsonp: "jsonpcallback", success: function (rdata) { alert(rdata.lat + ", " + rdata.lon); } }); alert("BETWEEN"); function jsonpcallback(rtndata) { alert("CALLED"); alert(rtndata.lat + ", " + rtndata.lon); } alert("AFTER"); When my javascript is run, the BEFORE, BETWEEN and AFTER alerts are displayed. The CALLED and other jsonpcallback alerts are not shown. Is there another way to tell if the jsonpcallback function has been called? Below is the PHP code I have running on the second server. I added the count table to my database just so that I can tell when this script is run. Every time I call the javascript, count has had an extra row inserted and the id number is correct. <?php header("content-type: application/json"); if (isset($_GET['id']) || isset($_POST['id'])){ $db_handle = mysql_connect($server, $username, $password); if (!$db_handle) { die('Could not connect: ' . mysql_error()); } $db_found = mysql_select_db($database, $db_handle); if ($db_found) { if (isset($_POST['id'])){ $SQL = sprintf("SELECT * FROM %s WHERE loc_id='%s'", $loctable, mysql_real_escape_string($_POST['id'])); } if (isset($_GET['id'])){ $SQL = sprintf("SELECT * FROM %s WHERE loc_id='%s'", $loctable, mysql_real_escape_string($_GET['id'])); } $result = mysql_query($SQL, $db_handle); $db_field = mysql_fetch_assoc($result); $rtnjsonobj -> lat = $db_field["lat"]; $rtnjsonobj -> lon = $db_field["lon"]; if (isset($_POST['id'])){ echo $_POST['jsonpcallback']. '('. json_encode($rtnjsonobj) . ')'; } if (isset($_GET['id'])){ echo $_GET['jsonpcallback']. '('. json_encode($rtnjsonobj) . ')'; } $SQL = sprintf("INSERT INTO count (bullshit) VALUES ('%s')", $_GET['id']); $result = mysql_query($SQL, $db_handle); $db_field = mysql_fetch_assoc($result); } mysql_close($db_handle); } else { $rtnjsonobj -> lat = 404; $rtnjsonobj -> lon = 404; echo $_GET['jsonpcallback']. '('. json_encode($rtnjsonobj) . ')'; }?> I am not entirely sure if the jsonp returned by this PHP is correct. When I go directly to the PHP script without including any parameters, I do get the following. ({"lat":404,"lon":404}) The callback function is not included, but that much can be expected when it isn't included in the original call. Does anyone have any idea what might be going wrong here? Thanks in advance!
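    A hedged sketch of the jQuery side with the duplicate jsonp keys collapsed (the host in surl is a placeholder, as in the question). In an object literal the later jsonp key overrides the earlier one, and with this shape jQuery generates the callback name itself, sends it as the jsonpcallback GET parameter, and routes the parsed response to success; the hand-written global jsonpcallback function is never called, which matches the CALLED alert never appearing.

        // assumed URL; the host was omitted in the question
        var surl = "http://example.com/pull.php";
        var idnum = "5a";   // note: a bare 5a is a JavaScript syntax error; quote it

        $.ajax({
            url: surl,
            data: { id: idnum },
            dataType: "jsonp",
            jsonp: "jsonpcallback",     // one key only: names the GET parameter the PHP reads
            success: function (rdata) { // jQuery invokes this with the parsed object
                alert(rdata.lat + ", " + rdata.lon);
            }
        });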

    Read the article

  • C#: Parallel forms, multithreading and "applications in application"

    - by Harry
    First, what I need is n WebBrowsers, each in its own window doing its own job. The user should be able to see them all, or just one of them (or none), and to execute commands on each one. There is a main form without a browser; this one contains the control panel for my application. The key feature is that each browser logs on to a secured web page and needs to stay logged in as long as possible. Well, I've done it, but I'm afraid something is wrong with my approach. The question is: is the code below valid, or rather a nasty hack which can cause problems? internal class SessionList : List<Session> { public SessionList(Server main) { MyRecords.ForEach(record => { var st = new System.Threading.Thread((data) => { var s = new Session(main, data as MyRecord); this.Add(s); Application.Run(s); Application.ExitThread(); }); st.SetApartmentState(System.Threading.ApartmentState.STA); st.Start(record); }); } // some other uninteresting methods here... } What's going on here? Session inherits from Form, so it creates a form, puts a WebBrowser into it, and has methods to operate on websites. WebBrowser requires an STA thread, so we provide one for each browser. The most interesting part of it is Application.Run(s). It makes the newly created forms alive and interactive. The Application.ExitThread() after it is called once the browser window is closed and its controls disposed. The main application stays alive to perform the rest of the cleanup job. When the user selects the "Exit" or "Shutdown" option, the browser threads are ended first, so Application.ExitThread() is called. It all works, but everywhere I read about the "main GUI thread" - and here I've created many GUI threads. I handle communication between the main form and my new forms (sessions) with thread-safe methods using Invoke(). It all works, so is it right or is it wrong? Is everything right with using Application.Run() more than once in one application? :) An ugly hack or a normal practice? This code dies if I start a WebBrowser from the session form's thread. It beats me why. It works, however, if I start the WebBrowser (by changing its Url property) from any other thread. I'd like to know more about what is really happening in such an application. But most of all, I'd like to know if my idea of "applications in application" is OK. I'm not sure what exactly Application.Run() does. Without it, forms created in new threads were dead and unresponsive. How is it possible that I can call Application.Run() many times? It seems to do exactly what it should, but it feels like a little-documented feature to me. I'm almost sure that the crashes are caused by the WebBrowser component itself (since it wraps a native control rather than being fully managed). But maybe it's something else.
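    For reference, a stripped-down sketch of the same pattern with hypothetical names and session count: one message loop per STA thread via Application.Run, which is the supported way to run several GUI threads, each owning its own forms.

        using System;
        using System.Threading;
        using System.Windows.Forms;

        internal static class Program
        {
            [STAThread]
            private static void Main()
            {
                for (int i = 0; i < 3; i++)           // hypothetical session count
                {
                    int n = i;                        // capture a per-iteration copy
                    var t = new Thread(() =>
                    {
                        var form = new Form { Text = "Session " + n };
                        Application.Run(form);        // this thread's own message loop;
                                                      // returns when the form closes
                    });
                    t.SetApartmentState(ApartmentState.STA);
                    t.IsBackground = true;            // don't keep the process alive on exit
                    t.Start();
                }
                Console.ReadLine();                   // stand-in for the main control form
            }
        }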

    Read the article

< Previous Page | 182 183 184 185 186 187 188 189 190 191 192 193  | Next Page >