Search Results

Search found 16809 results on 673 pages for 'nothing 2 lose'.


  • How can I invoke a .Net DLL from a LabView 6.1 VI?

    - by tw1k
    I work in a manufacturing company that uses LabView for testing the devices we make. Most of the test engineers are using 7.1, which can natively reference a .Net assembly. However, there is a group that is stuck on LabView 6.1. I would like them to be able to use my .Net assembly, which is basically a proxy to some web services. I have created a test assembly that is nothing more than Hello World, and I'm trying to consume it in a VI. I made it COM visible, registered it with regasm.exe, and created a type library, which I'm not sure I need. I can see it in Visual Studio in the list of COM objects when I open the Add Reference window, so I know it's registered properly. I'm very unfamiliar with VIs. I'm only looking at this because no one I have spoken to in manufacturing knows anything about invoking a COM object in a VI. I'm basically looking for the names of some controls or menu options to get the test engineers pointed in the right direction. I did a bunch of searching on Google and the NI forums, but didn't find much. Alternatively, would it be easier to write a C or C++ DLL that acts as a proxy to my .Net DLL? Or is there a simple way to invoke a web service from a VI? That might obviate the need for a DLL altogether. I'm currently reading through this document from NI for help, but it obviously knows nothing about .Net and might not help me choose the best path forward.

    Read the article

  • No audio with streaming video

    - by Chris Barnhill
    I am having trouble with audio when playing streaming videos. My sound card is fine. I know this because if I play sounds from my local machine, there's no problem. It's only when I try to play sounds from the internet that I lose audio. This only started happening recently, when I did two things: I connected a USB headphone/microphone set to record screencasts, and I began recording/publishing screencasts from screenr.com. I have tried playing video both with the headset connected and without it: it makes no difference. If I record a screencast on screenr.com and preview it, I hear the audio. But once I publish it and play it, there is no audio. I also hear no audio with YouTube videos. I really hope someone can help. Thanks. The latest is that the problem went away after I powered my system off and on. A reboot didn't do it; I had to actually shut down the power.

    Read the article

  • The best way to hide data: encryption, connection, hardware

    - by Tico Raaphorst
    So to say, if I have a VPS which I own now, and I wanted to make the most secure and stable system that I can, how would I do that? Just to try it, I installed Debian 7 with LVM encryption via the installer: you get two partitions, a /boot partition and an encrypted partition. When booting you are prompted to enter the password that unlocks the encrypted partition, which then contains more partitions like /home, /usr and swap space, which mount automatically. Now, I do need to enter that password over a VNC-SSL connection via the control panel website of the VPS host, so they could see my disk encryption password if they wanted to, and they have the option to look at whatever data I have, right? Data encryption on a VPS: is it possible to have a 100% secure virtual private server? So let's say I have my server and it is sitting well locked next to me, with the following covered: BIOS (you have to replace the BIOS), RAID (you have to unlock the RAID config), disk (you have to unlock the disk encryption), and file-level archives like zip/tar (files are stored in encrypted archives, which sit inside some other encrypted file mounted as a partition), all on the same system. It will be slow, but the encryption would be extremely difficult to crack, say if you stole the server. Then I only need to make connections like SSH safer with single-use passwords, and block all incoming and outgoing connections but allow one "exception" for myself, and maybe another in case I somehow lose my identity for the first "exception". What other overkill but realistic security options are available? I have heard about SELinux.

    Read the article

  • Is it possible to do a 301 redirect AND redirect to the requested resource?

    - by Pure.Krome
    For one of our projects, we're doing a rebranding of the website name, logo, etc. As such, we need to issue a 301 Moved Permanently redirect for all users from the old domain to the new domain. With IIS7, that's pretty simple. We just create a new website for the host-headered old domain that redirects all traffic to the new one. But this loses their original destination resource. E.g. Old domain: www.OldDomain.com New domain: www.NewDomain.com User: www.OldDomain.com/user/PureKrome -> 301 -> www.NewDomain.com Notice how it's going to the new domain BUT not to /user/PureKrome? How can I do this so it goes to the new domain and keeps the original resource request? I'm guessing the URL Rewrite module for IIS7 might help? Also, what happens if I want to do this... Current domain 1: Domain.com Correct domain 1: www.Domain.com Current domain 2: AnotherDomain.com Correct domain 2: www.AnotherDomain.com Is it also possible to have those in the same IIS website? So any URL to domain.com will 301 to www.domain.com. Right now I'm making two IIS websites, with a 301 hardcoded (which still means I lose the original resource request, too). Help!
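
    One way to keep the requested path across the redirect is the URL Rewrite module (a separate install on IIS7). The following web.config fragment is only a hedged sketch: it assumes the module is installed and uses the OldDomain.com / www.NewDomain.com names from the example above as stand-ins, but the {R:1} back-reference is what carries /user/PureKrome over to the new domain:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Old domain to new domain" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <add input="{HTTP_HOST}" pattern="^(www\.)?olddomain\.com$" />
                </conditions>
                <!-- Permanent = 301; {R:1} preserves the originally requested path -->
                <action type="Redirect" url="http://www.newdomain.com/{R:1}" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    The same idea should cover the Domain.com -> www.Domain.com case within a single IIS website: a second rule whose condition matches the bare host and whose action prepends www, again with {R:1} keeping the original resource.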

    Read the article

  • Ubuntu on VPS becomes unresponsive: BUG: soft lockup - CPU#0 stuck for 22s

    - by Bhante Nandiya
    We have a VPS running Ubuntu, on Xen. The problem is this: about once a day, for about 20-50 minutes, at a random time, the server becomes completely unresponsive to the outside world. After this period, it becomes responsive again as if nothing had happened; it doesn't lose uptime, it doesn't restart. It just starts responding again as if it had been in suspended animation. These outages occur under conditions of non-exceptional memory and CPU, for example 70% mem, 5% CPU. I have stopped all non-essential services so the usage is very even. The outages don't particularly occur during times of increased memory/CPU use (during daily tasks); they sometimes occur at times of very low CPU use (<2%), but in the past they also occurred during swapping. These blackouts have been occurring both under Ubuntu 12.04 LTS and Ubuntu 14.04 LTS - no change at all (I upgraded Ubuntu specifically to see if it helped this problem). It is possible to log into our webhost's site and use their administration console to see error messages from during this time. Presumably these messages are from the Xen virtualization; the main message goes like this: BUG: soft lockup - CPU#0 stuck for 22s! [ksoftirqd/0:3] (repeats many times) SysRq : Emergency Sync (sometimes this is the only message in the console) Others seen previously under different load situations include: BUG: soft lockup - CPU#0 stuck for 22s! [swapper/0:0] (repeated many times) or: INFO: rcu_sched detected stall on CPU 0 (t=15000 jiffies) (repeated many times with t getting bigger) From googling around I've tried various kernel parameters such as nohz=off and acpi=off, to no avail. All tech support has said is that other Ubuntu installations are not suffering the same problem. Anyone got any ideas or experience with this problem?

    Read the article

  • 100% uptime for a web application

    - by Chris Lively
    We received an interesting "requirement" from a client today. They want 100% uptime with off-site failover on a web application. From our web application's viewpoint, this isn't an issue. It was designed to be able to scale out across multiple database servers, etc. However, from a networking standpoint I just can't seem to figure out how to make it work. In a nutshell, the application will live on servers within the client's network. It is accessed by both internal and external people. They want us to maintain an off-site copy of the system that, in the event of a serious failure at their premises, would immediately pick up and take over. Now we know there is absolutely no way to resolve it for internal people (carrier pigeon?), but they want the external users to not even notice. Quite frankly, I haven't the foggiest idea of how this might be possible. It seems that if they lose Internet connectivity then we would have to do a DNS change to forward traffic to the external machines... which, of course, takes time. Ideas? UPDATE: I had a discussion with the client today and they clarified the issue. They stuck by the 100% number, saying the application should stay active even in the event of a flood. However, that requirement only kicks in if we host it for them. They said they would handle the uptime requirement if the application lives entirely on their servers. You can guess my response.

    Read the article

  • Is it bad to redirect http to https?

    - by jasondavis
    I just installed an SSL certificate on my server. I use a web hosting panel called ZPanel, which is an open source project. It then set up a redirect so that all traffic on my domain on port 80 is sent to port 443. In other words, all my http://example.com traffic is now redirected to the appropriate https://example.com version of the page. The redirect is done in my Apache virtual hosts file with something like this... RewriteEngine on RewriteCond %{SERVER_PORT} !^443$ RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L] My question is, are there any drawbacks to using SSL? Since this is not a 301 redirect, will I lose link juice/ranking in search engines by switching to https? I appreciate the help. I have always wanted to set up SSL on a server, just for the practice of doing it, and I finally decided to do it tonight. It seems to be working well so far, but I am not sure if it's a good idea to use this on every page. My site is not eCommerce and doesn't handle sensitive data; it's mainly for looks and the thrill of installing it for learning. UPDATED ISSUE: Strangely, Bing creates this screenshot from my site now that it is using HTTPS everywhere...
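
    For what it's worth, a bare R flag in mod_rewrite defaults to a 302; making the redirect permanent is a one-flag change. This is just a sketch of the same rules with that tweak, assuming the identical virtual-host context:

        RewriteEngine on
        # R=301 marks the redirect as permanent; a bare R defaults to a 302
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R=301,L]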

    Read the article

  • DotNetOpenId - Using OpenIdButton for Google Apps

    - by JediPotPie
    I am an ASP.NET newbie and I am trying to design an OpenID/SSO system for an internal web application. The web application is pretty simple, and authentication is currently being managed by a database of usernames and passwords. I want to replace the existing accounts stored in the database with Google Apps accounts. I have downloaded the latest DotNetOpenAuth-3.4.3.10103 package and got the OpenIdRelyingPartyWebForms sample up and running on IIS. I have built my own login page using just an OpenIdButton that points to a development Google domain. The button seems to work fine in Firefox, at least it forwards me to the Google Apps login, but nothing happens when I load the same page in IE. When I click the Google button, nothing happens, zip. The same is true for the Yahoo button in the login.aspx page given in the sample. Here is the .aspx code I am using... <rp:OpenIdButton runat="server" ImageUrl="http://www.google.com/accounts/google_transparent.gif" Text="Login with Google!" ID="googleLoginButton" Identifier="https://www.google.com/accounts/o8/site-xrds?hd=dev.connexcloud.com" />

    Read the article

  • I cannot connect to home server after a few hours

    - by Iago
    I have an old PC and I decided to revive it: a LAMP server (for my own use) and a P2P server (torrent and ed2k). My old PC is an AMD Athlon XP (1400 MHz) with 384 MB of RAM. First of all I installed Ubuntu Server 11.10, SSH, FTP, Samba and LAMP. With this configuration my home server works well, with no problems. Then I moved on to the P2P server and tried rTorrent and then uTorrent Server Alpha. And here is my problem: after a few hours (maybe 10 hours, or maybe 30 hours) with the torrent app running (rTorrent or uTorrent) I lose the connection to my home server. That is, I cannot access it via SSH, I cannot access the Apache server, etc., but I can still ping the home server. It seems that the server freezes, and all I can do is reboot it physically. So I have two questions: what is the problem, and how can I solve it?

    Read the article

  • computer fails to boot during/after POST for five or six boots, then works

    - by N13
    For the last few days, my computer has had issues booting. I've seen two different behaviors: The screen displays the graphics card information, then begins to list the RAM, hard drives, etc. At different points in this process (after the graphics info), the computer shuts off. After five or six attempts, it then boots normally. In roughly the same time frame, the computer freezes, and fails to boot. I think it boots successfully on the next attempt. I've also noticed that in some instances, the computer freezes on shutdown. It gets right to the point where it should shut off, but doesn't. I recently combined the best parts of two different machines into this one. I'm booting to GRUB, with Ubuntu 12.04, Linux Mint 11 and Windows Vista (unfortunately) as my OS options. It has an Enermax Modu82+ 525W power supply, and I've used an online calculator to determine that my load shouldn't exceed 400W. I even unplugged a hard drive, but that didn't help. I found the latest BIOS, patched it and checked the settings, but that didn't fix it. I'm fairly certain this issue didn't exist at first, but might have started when the power at my new apartment dropped for a second. The machine is plugged into a surge protector strip, but it's old and I've heard they lose effectiveness with age. Is a power dip as damaging as a spike? If something were damaged, why would it boot successfully after five or six attempts? It's almost like the BIOS or PSU need to be primed. The trouble with debugging is that there seems to be a "grace period" after shutdown where the issue doesn't present itself again. What should I try next?

    Read the article

  • Losing SQL connections

    - by john pavelka
    SQL Server 2005 Standard; one dedicated SQL Server (VM); Windows Server 2003; small databases. About once a week we lose all SQL connections. It seems to fix itself after about 5-10 minutes. System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. We don't have a fully qualified DBA; it's kind of a joint effort here. Can somebody give me some general ideas for troubleshooting the network side and the application side? We already ran a few tuning profiles and ran through the Database Tuning Advisor to apply indexing recommendations. It would sure be nice if there were a way to take a snapshot of what was running on SQL Server when these 100% CPU spikes occurred, but sometimes we're not around. Is it common to throttle CPU for certain processes? Can this be done with Windows Server 2003? For example, if security apps were making the CPU spike to 100%, is there a way to limit their CPU usage? Any advice is appreciated. Thanks.

    Read the article

  • Ubuntu + virtualenv = a mess? virtualenv hates dist-packages, wants site-packages

    - by lostincode
    Can someone please explain to me what is going on with Python in Ubuntu 9.04? I'm trying to spin up virtualenv, and the --no-site-packages flag seems to do nothing with Ubuntu. I installed virtualenv 1.3.3 with easy_install (which I've upgraded to setuptools 0.6c9) and everything seems to be installed to /usr/local/lib/python2.6/dist-packages. I assume that when installing a package using apt-get, it's placed in /usr/lib/python2.6/dist-packages/? The issue is, there is a /usr/local/lib/python2.6/site-packages as well that just sits there being empty. It would seem (by looking at the path in a virtualenv) that this is the folder virtualenv uses as backup. Thus even though I omit --no-site-packages, I can't access my local system's packages from any of my virtualenvs. So my questions are: How do I get virtualenv to point to one of the dist-packages? Which dist-packages should I point it to, /usr/lib/python2.6/dist-packages or /usr/local/lib/python2.6/dist-packages/? What is the point of /usr/lib/python2.6/site-packages? There's nothing in there! Is it first come, first served on the path? If I have a newer version of package XYZ installed in /usr/local/lib/python2.6/dist-packages/ and an older one (from the Ubuntu repos/apt-get) in /usr/lib/python2.6/dist-packages, which one gets imported when I import xyz? I'm assuming this is based on the path list, yes? Why the hell is this so confusing? Is there something I'm missing here? Where is it defined that easy_install should install to /usr/local/lib/python2.6/dist-packages? Will this affect pip as well? Thanks to anyone who can clear this up!
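
    A quick way to see what a particular interpreter will actually do is to print its search path and then ask an imported module where it came from; the first directory on sys.path that contains the package wins. This is only a sketch, and xyz is a stand-in for the real package name:

        import sys

        # Print the import search path in order; earlier entries shadow later ones.
        for p in sys.path:
            print(p)

        import xyz                 # hypothetical package installed in both locations
        print(xyz.__file__)        # shows which dist-packages/site-packages copy was loaded

    Running that once with /usr/bin/python and once with a virtualenv's bin/python makes it easy to compare which dist-packages and site-packages directories each interpreter really searches.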

    Read the article

  • Is it ok to share private key file between multiple computers/services?

    - by Behrang
    So we all know how to use public/private key pairs with SSH, etc. But what's the best way to use and reuse them? Should I keep them in a safe place forever? I mean, I needed a pair of keys for accessing GitHub. I created a pair from scratch and used that for some time to access GitHub. Then I formatted my HDD and lost that pair. Big deal - I created a new pair and configured GitHub to use it. Or is it something that I shouldn't want to lose? I also needed a pair of keys to access our company systems. Our admin asked me for my public key, so I generated a new pair and gave it to him. Is it generally better to create a new pair for access to different systems, or is it better to have one pair and reuse it to access different systems? Similarly, is it better to create two different pairs and use one to access our company's systems from home and the other to access them from work, or is it better to just have one pair and use it from both places?

    Read the article

  • Reading JSON with Javascript/jQuery

    - by Josephine
    I'm building a game in JavaScript/HTML5, and I'm trying to build a database of locked doors in a maze that can be loaded from and overwritten throughout gameplay. I've found a large number of tutorials online, but nothing is working. I was wondering if someone could look at what I'm trying and let me know what I'm doing wrong. My JSON file looks like this: { "doors": [ {"left":true, "right":false, "bottom":false}, {"left":false, "right":false, "bottom":false}, {"right":false, "bottom":false, "top":false}, {"left":false, "right":false, "top":false} ] } I want to build the HTML page so that when a player collides with a door it checks whether it's locked or not, like: if (player.x < leftDoor.x + leftDoor.width && player.x + player.width > leftDoor.x && player.y < leftDoor.y + leftDoor.height && player.y + player.height > leftDoor.y) { if(doors[0].left == true) alert("door is locked"); else window.location = ( "2.html?p1="); } However, I'm having trouble reading from the JSON file itself. I've tried things like: function loadJson() { $(document).ready(function() { $.getJSON('info.json', function(doors) { alert(doors[0].left); }); }); } But nothing happens, and I need to be able to access the information in the HTML as well. I'd rather use jQuery, but I'm not opposed to straight JS if it works. I've been trying to do this for ages and I'm getting absolutely nowhere. If someone could help that would be amazing. Thanks!
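
    One low-tech way to sanity-check the shape of the data (a sketch, assuming info.json is the file quoted above) is to load it outside the browser and look at the nesting; the array sits under a top-level "doors" key, which is the same structure the $.getJSON callback argument receives:

        import json

        # Assumption: info.json is the file shown in the question.
        with open("info.json") as f:
            data = json.load(f)

        print(list(data.keys()))          # ['doors'] : the array is nested one level down
        print(data["doors"][0]["left"])   # True, i.e. the first room's left door is locked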

    Read the article

  • Flex 4 release: changes to application are not showing up

    - by guacamoly
    I just took over a client's Flex project and I can't get the app to reflect even a simple trace statement. Before I took over, the project was last successfully built using the Flash Builder Beta 2 environment/SDK. I have the latest release version of Flash Builder 4. Upon importing the project into FB4, I got a ton of errors, mostly because of the changes made to the SDK from beta 2 to release. Some of the things I corrected: - the mx namespace, from library://ns.adobe.com/flex/halo to library://ns.adobe.com/flex/mx - video player skinning: a lot of the state names for the video player component had been changed and more required states had been added; there were also other video-related component and property names that had to be updated. But I fixed all that and the application was finally able to compile (although with some warnings, mostly of the duplicate-variable type). The only thing now is that whatever change I make to the project doesn't get reflected in the build (debug or release). I changed existing traces and added additional traces. Nothing shows up. I even removed the applicationComplete property in the main.mxml. Everything still ran as if nothing had changed. Also, I can't seem to debug the app. Whenever I try to debug, Flash Builder says "Swf Application doesn't contain the required debugging information ..." Anyone have any idea how I should even begin tackling all this? Any help or pointers would be greatly appreciated.

    Read the article

  • Send file FTP over SSL with custom port number

    - by JM4
    I have asked this question before, but in a different manner. I am taking form data, compiling it into a temporary CSV file, and trying to send it over to a client via FTP over SSL (this is the only route I am interested in hearing solutions for; unless there is a workaround to doing this, I cannot make changes). I have tried the following: ftp_connect - nothing happens, the page just times out; ftp_ssl_connect - nothing happens, the page just times out; curl library - same thing, and given the URL it also gives an error. I am given the following information: FTPS server IP address, TCP port (1234), username, password, data directory to dump the file, FTP mode: passive. Very, very basic code (which I believe should initiate a connection at minimum): Code: <?php $ftp_server = "00.000.00.000"; //masked for security $ftp_port = "1234"; // masked but not 990 $ftp_user_name = "username"; $ftp_user_pass = "password"; // set up basic ssl connection $conn_id = ftp_ssl_connect($ftp_server, $ftp_port, "20"); // login with username and password $login_result = ftp_login($conn_id, $ftp_user_name, $ftp_user_pass); echo ftp_pwd($conn_id); // / echo "hello"; // close the ssl connection ftp_close($conn_id); ?> When I do this from a SmartFTP client, everything works just fine. I just can't get it to work using PHP (which is a necessity). Has anybody had success doing this in the past? I would be very interested to hear your approach.
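
    One way to isolate whether the problem is the PHP code or the network path is to try the same host, port and credentials from a small script in another language. The sketch below uses Python's ftplib.FTP_TLS with the masked placeholder values from the question (export.csv is a made-up name for the temporary CSV); it is only a diagnostic cross-check, not the PHP fix:

        from ftplib import FTP_TLS

        # Host, port and credentials are the masked placeholders from the question;
        # export.csv stands in for the temporary CSV built from the form data.
        ftps = FTP_TLS()
        ftps.connect("00.000.00.000", 1234, timeout=30)
        ftps.login("username", "password")
        ftps.prot_p()           # protect (encrypt) the data channel as well
        ftps.set_pasv(True)     # passive mode, as the host requires
        with open("export.csv", "rb") as f:
            ftps.storbinary("STOR export.csv", f)
        ftps.quit()

    If that connects but the PHP version still hangs, it may be worth checking that the PHP side also requests passive mode (ftp_pasv($conn_id, true) after login) and that the firewall allows the passive data ports; if the Python attempt hangs too, the issue is more likely on the network side than in the code.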

    Read the article

  • MBR seems to be gone

    - by bobobobo
    So, horror story for everyone. I bought two spanking new HDDs. MM!! Gbitage. I removed all my old HDDs, physically labelled them, and was preparing to install the new HDDs (fresh system install included!). To make sure which HDD was which, I popped each OLD HDD (data filled!) into a Thermaltake BlacX toaster... surprisingly, BOTH couldn't be read. I didn't have static on my hands! I'm certain of it. I touched metal, touched wood, before beginning all this. Thinking that was strange, I set up the new system, installed Win XP (of course!) on the new HDD, and now the two OLD HDDs (data filled!) that were put into the toaster cannot be read. And they had tons of data on them. I read about MBRs being nuked, and it sounds like that is what happened. But I'm at a loss as to what to do. There are so many MBR recovery programs out there that I kind of feel overwhelmed. I don't want to lose my data by just picking one, yet it seems so close within reach that I'm not panicking anymore... Anybody have a play-by-play that I could follow? I just don't want to spend $900 on data recovery centers if I can do this myself.

    Read the article

  • Mixing menuItem.setIntent with onOptionsItemSelected doesn't work

    - by superjos
    While extending a sample Android activity that fires other activities from its menu, I ended up with some menu items handled within onOptionsItemSelected, and some menu items (which just fire intents) handled by calling setIntent within onCreateOptionsMenu. Basically something like: @Override public boolean onCreateOptionsMenu(Menu menu) { super.onCreateOptionsMenu(menu); menu.add(0, MENU_ID_1, Menu.NONE, R.string.menu_text_1); menu.add(0, MENU_ID_2, Menu.NONE, R.string.menu_text_2); menu.add(0, MENU_ID_3, Menu.NONE, R.string.menu_text_3). setIntent(new Intent(this, MyActivity_3.class)); return true; } @Override public boolean onOptionsItemSelected(MenuItem item) { super.onOptionsItemSelected(item); switch (item.getItemId()) { case (MENU_ID_1): // Process menu command 1 ... return true; case (MENU_ID_2): // Process menu command 2 ... // E.g. also fire Intent for MyActivity_2 return true; default: return false; } } Apparently, in this situation the Intent set on MENU_ID_3 is never fired, or at any rate the related activity is never started. The Android javadoc at some point says that [if you set an intent on a menu item] and nothing else handles the item, then the default behavior will be to [start the activity with the intent]. What does "and nothing else handles the item" actually mean? Is it enough to return false from onOptionsItemSelected? I also tried not calling super.onOptionsItemSelected(item) at the beginning and only invoking it in the default switch case, but I had the same results. Does anyone have any suggestions? Does Android allow mixing the two types of handling? Thanks for your time, everyone.

    Read the article

  • Productive Writing Software

    - by Nick Retallack
    What do you guys use to write a personal journal, notes, and reference information that you want to group and search through later? I'm not one of those crazy people who likes to share their journal with the world. Some things I like to keep to myself. It's quite nice to have a reference. Preferences for cross-platform stuff (windows, mac, linux) Some Background For Windows, there's a program called Yeah Write, which really changed my expectations about how easy it should be to start writing something. You don't open or close files -- just click an empty slot and start writing, or click a filled slot to work on another file. And you can organize things into categories by creating tabs. Now that I carry a Macbook, I've just been using TextEdit. I like it because I can't lose my work when my computer crashes: it auto-saves everything and restores it when I launch TextEdit again. But I make a mess, leave thirty open files, and save everything to one directory with no tags for easy grouping. Saving files and selecting the directory they should be in is too clunky to do in the middle of a meeting. And organizing files into folders in the Finder is a pain, since there's no tree view like on Windows. Sure, I'm lazy, but I miss Yeah Write. I'm not going to get a windows laptop just to use it though. Since the laptop I take my notes on is a mac, I'm gonna be biased toward mac solutions.

    Read the article

  • How to add extensions to a lot of files using content of each file?

    - by v8media
    I've got over 10,000 files from older versions of the Mac OS that don't have extensions. They're extremely nested, and they also have all sorts of strange formatting and characters. They no longer have file types or creator codes attached to them. A great many of these files have text in them that will let me determine extensions (for example, Word.Document.8 is in every file created by that version of Word, and Excel.Sheet.8 is in every file created with that version of Excel). I found a script that looks like it would work for one of these file types at a time, but it erases parts of filenames after nefarious characters, which is not good. find . -type f -not -name "." -print0 |\ xargs -0 file |\ grep 'Word.Document.8' |\ sed 's/:.*//' |\ xargs -I % echo mv % %.doc So, two questions from that. One is, should I clean the characters in the filenames first, or deal with them programmatically in the script in order to leave them the same? As long as I lose no information from the filenames, I don't see a problem cleaning out slashes and other problem characters. Also, if I clean the filenames, there are likely to be duplicates, so any cleaning script would have to add something like "-1" before the extension to make sure nothing gets lost. The second question is: how do I change the script so that it will look for more than one file type at the same time and give each the proper extension? I'm not tied to this script, but it is understandable, which is a pro. Mac OS X 10.6 is installed on this file server, but I've got access to any recent version of OS X. Thanks, Ian
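
    For the more-than-one-file-type question, one way to sidestep the filename-mangling problem entirely is to do the content sniffing and renaming in a small script rather than a find/sed pipeline, so filenames are never split on special characters. This is an untested sketch with assumed details (the marker-to-extension map, reading each whole file into memory, numbering name clashes with -1, -2, ...):

        import os

        # Assumption: content markers mapped to extensions; extend as needed.
        MARKERS = {b"Word.Document.8": ".doc", b"Excel.Sheet.8": ".xls"}

        def pick_name(path, ext):
            candidate = path + ext
            n = 1
            while os.path.exists(candidate):        # never clobber an existing file
                candidate = "%s-%d%s" % (path, n, ext)
                n += 1
            return candidate

        for root, dirs, files in os.walk("."):
            for name in files:
                if os.path.splitext(name)[1]:       # skip files that already have an extension
                    continue
                path = os.path.join(root, name)
                with open(path, "rb") as handle:
                    data = handle.read()            # fine for these sizes; not for huge files
                for marker, ext in MARKERS.items():
                    if marker in data:
                        os.rename(path, pick_name(path, ext))
                        break

    Because the paths are handed around as plain strings and passed straight to os.rename, odd characters in the old filenames are preserved rather than truncated.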

    Read the article

  • Tomcat does not pick up the class file - the JSP file is not displayed

    - by blueSky
    I have a Java code which is a controller for a jsp page, called: HomeController.java. Code is as follows: @Controller public class HomeController { protected final transient Log log = LogFactory.getLog(getClass()); @RequestMapping(value = "/mypage") public String home() { System.out.println("HomeController: Passing through..."); return "home"; } } There is nothing especial in the jsp page: home.jsp. If I go to this url: http://localhost:8080/adcopyqueue/mypage I can view mypage and everything works fine. Also in the tomcat Dos page I can see the comment: HomeController: Passing through... As expected. Now under the same directory that I have HomeController.java, I've created another file called: LoginController.java. Following is the code: @Controller public class LoginController { protected final transient Log log = LogFactory.getLog(getClass()); @RequestMapping(value = "/loginpage") public String login() { System.out.println("LoginController: Passing through..."); return "login"; } } And under the same place which I have home.jsp, I've created login.jsp. Also under tomcat folders, LoginController.class exists under the same folder that HomeController.class exists and login.jsp exists under the same folder which home.jsp exists. But when I go to this url: http://localhost:8080/adcopyqueue/loginpage Nothing is displayed! I think tomcat does not pick up LoginController.class b/c on the tomcat Dos window, I do NOT see this comment: LoginController: Passing through... Instead I see following which I do not know what do they mean? [ INFO] [http-8080-1 01:43:45] (AppInfo.java:populateAppInfo:34) got manifest [ INFO] [http-8080-1 01:43:45] (AppInfo.java:populateAppInfo:36) manifest entrie s 8 The structure and the code for HomeController.java and LoginController.java plus the jsp files match. I have no idea why tomcat sees one of the files and not the other? Clean build did not help. Does anybody have any idea? Any help is greatly appraciated.

    Read the article

  • INSERT DELAYED on locked tables blocks PHP processes from continuing

    - by sw0x2A
    Our webservers write some tracking information into a MySQL database (using INSERT DELAYED into a MyISAM table). When a huge SELECT query is executed on this table, or when it is locked for another reason, the webserver processes (doing the INSERT DELAYED) end up waiting for the database, and in some cases the MaxServer limit is reached in Apache, so it stops serving requests. We use INSERT DELAYED because "The DELAYED option for the INSERT statement is a MySQL extension to standard SQL that is very useful if you have clients that cannot or need not wait for the INSERT to complete. This is a common situation when you use MySQL for logging and you also periodically run SELECT and UPDATE statements that take a long time to complete." (Quote from the MySQL documentation.) I am wondering why the Apache processes are waiting for the INSERT DELAYED to finish, and what I can do to just send the data and forget about it. (Since this is logging data, I do not care if we lose some entries.) Even when the table is locked, the PHP script should just go on and not wait for an answer from MySQL. (We do not want to set up master-slave replication for this table, but we are thinking about moving this data to some NoSQL database. For now, though, I would like to know why INSERT DELAYED is not working as expected.)

    Read the article

  • How can I create a DOTNET COM interop assembly for Classic ASP that does not sequentially block other requests?

    - by Alex Waddell
    Setup: create a simple COM add-in in .NET/C# that does nothing but sleep on the current thread for 5 seconds. namespace ComTest { [ComVisible(true)] [ProgId("ComTester.Tester")] [Guid("D4D0BF9C-C169-4e5f-B28B-AFA194B29340")] [ClassInterface(ClassInterfaceType.AutoDual)] public class Tester { [STAThread()] public string Test() { System.Threading.Thread.Sleep(5000); return DateTime.Now.ToString(); } } } From an ASP page, call the test component: <%@ Language=VBScript %> <%option explicit%> <%response.Buffer=false%> <% dim test set test = CreateObject("ComTester.Tester") %> <HTML> <HEAD></HEAD> <BODY> <% Response.Write(test.Test()) set test = nothing %> </BODY> </HTML> When run on a Windows 2003 server, the test.asp page blocks ALL OTHER threads in the site while the COM component sleeps. How can I create a COM component for ASP that does not block all ASP worker threads?

    Read the article

  • Is the DOM not being loaded?

    - by Daniel
    I went through episode 88 (Dynamic menus) of the Railscasts, and when I load my *.js.erb file in the browser it shows me that the data fetched from the controller is getting there. Controller: def dynamic_departments @departments = Department.all end localhost:3000/javascripts/dynamic_departments.js var departments = new Array(); departments.push(new Array(1,'????',1)); departments.push(new Array(2,'???-???',2)); function facultySelected(){ faculty_id = $('falculty_id').getValue(); options = $('department_id').options; options.length = 1; departments.each(function(department){ if(department[0] == faculty_id){ options[options.length] = new Option(department[1],department[2]) } }); if(options.length == 1){ $('department_field').hide(); } else { $('department_field').show(); } } document.observe('dom:loaded', function(){ alert("DOM loaded"); //$('department_field').hide(); //$('faculty_id').observe('change',facultySelected); }); My routes.rb has the match ':controller/:action.:format'. Still, after the page is loaded and I change the value of my collection_select or select, nothing happens. What am I missing? *I called the 'alert' and commented out the rest to test it... still nothing.

    Read the article

  • WN server filter won't work

    - by Mike Fink
    The WN server has an alternative to CGI programs called filters. I have been trying to get one to work, but I have had no luck. I am writing in Python. It looks like the server is not receiving any output from the program, but is parsing nothing and wrapping this nothing in my standard header and footer. I have run chmod 755 on the program, and my index.wn file reads: Default-Attributes=parse Default-Wrappers=templates/template1.inc File=includeTests.html File=index.html File=archives.html File=contact.html File=style.css File=testProgram.py #here is the stuff about the filter File=testFilter.html Content-type=text/html Filter=testProgram.py Attributes=parse, cgi Here is what is in the filter called testProgram.py: #!/usr/bin/python print "Content-Type: text/html\n\n" print "hi" testProgram.py works perfectly if it is dropped into a cgi-bin folder and chmodded. I suppose my problem may lie with the fact that I have never seen a filter program written in Python. I'm not sure I have even seen a filter program at all. Does anyone out there have any experience with WN servers and filters? Any ideas?

    Read the article
