Search Results

Search found 14169 results on 567 pages for 'parallel programming'.

Page 396/567 | < Previous Page | 392 393 394 395 396 397 398 399 400 401 402 403  | Next Page >

  • IIS 7.5 truncating POST body containing JSON data with ASP.NET MVC 3

    - by Guneet Sahai
    I'm facing a problem which I hope is a configuration thing with IIS, but right now it is giving me a lot of trouble. Basically I have a controller that accepts a JSON payload and does some processing. It generally works fine, but every now and then, when the system is under some load, I get an error. After some painful debugging, we figured out that the incoming JSON gets truncated, which causes the deserializer to fail. To narrow down the problem, we wrote a simple controller that accepts a JSON payload and tries to deserialize it; in case it fails, it just logs it. This works fine, but when I hit it using a load testing tool (JMeter) it throws the same error (truncation) for a few requests. The number of failures increases as I increase the number of parallel connections; it starts showing at around 150 concurrent requests. We are running IIS 7 on Windows Server 2008 with ASP.NET MVC 3 and a more or less default configuration of IIS. More information is available in my question here: http://stackoverflow.com/questions/12662282/content-length-of-http-request-body-size

    Read the article

  • CUPS printer on Virtual Machine can be accessed via CUPS admin, but not by XP?

    - by SJaguar13
    I have a Zebra label printer connected to a Linux Mint virtual machine. It was set up with CUPS, and a Windows XP computer could then print to it via http://192.168.1.76:632/printers/labelprinter. That was all fine and dandy. I then hooked up a Fargo Pro L PVC card printer to a Windows XP virtual machine; I had to disconnect the label printer, as the server that hosts both virtual machines only has one parallel port. Now I have plugged the Zebra in again, and the Windows XP computer cannot print to it anymore. If I go to the CUPS admin panel from the Windows XP computer, I can see the printer, everything looks fine, and I can send it a test page, which prints. If I try to print from Windows, I get an error that the printer is not found / cannot connect to the server. The only other thing that changed was the firewall on the router, to allow remote desktop to another computer from outside the network, but all the firewall changes were for external access; nothing affected the IP addressing of the internal network. The Linux Mint VM also had a PDF printer that was shared with CUPS; that printer is also down. I tried setting up a new CUPS installation on another VM, and when I go to share it with XP, I get the same error. I don't know what else to try. It has access, it can get to the admin page from that computer, it seems to be up and ready, but when Windows tries to connect, the printer isn't found, even though four days ago everything was fine. Any ideas?

    Read the article

  • Is there a historical computer peripherals or accessories museum or even just a current list?

    - by zimmer62
    Thinking about all the unique and different peripherals I've owned over the years, from ISA capture cards to parallel-port-controlled shutter glasses for 3D games, I've seen many, many accessories and computer peripherals come and go. The nostalgia of these things is a lot of fun. I tried to find some sort of historical time-line or list, but what mostly turned up was computers themselves. I'm more interested in the mice, the scanners, the weird adapters that shouldn't exist, short-run very rare products, strange devices from computer shows in the '80s and '90s... hardware you might find in a geek's basement that would be completely useless now, but was the coolest thing around when it was new. An example would be a drawing tablet I had for my TI-99 computer, or the audio tape player accessory for a C64 which let you save files to audio tapes, or an ISA card that did the same for PCs hooked up to a VCR. Remember the IBM PCjr upgrade kit that added a floppy drive, more memory and the AT switch in the back? I'd love to find either a wiki or a list that has already been assembled which contains many of these weird (or common) accessories. I've had so many over the years I suppose I could start a wiki here if such a list doesn't already exist.

    Read the article

  • Why does clicking on Windows 7 Printer Properties Result In Driver Not Installed?

    - by octopusgrabbus
    The question I need to ask is: has anyone heard of getting a "driver not installed" error when clicking on a printer's properties in Windows 7, and is there a workaround? Here are the details of the problem. One of our users has a Windows 7 desktop and an HP LaserJet 4050 T connected via a parallel-to-USB converter. The PCL5 universal driver was installed for series 4050 printers. I needed to install the PCL6 driver, which completed successfully. The user is an administrator of the system, and I was prompted to, and accepted, running as Administrator to install the driver. After the install, I went to see the 4050's properties and was prompted that the PCL6 driver was not installed. I believe the PCL6 driver was installed, because the PCL5 driver resulted in receiving an official HP error page indicating the printer was "not set up for collating" as the second page when printing two copies of a one-page email; this problem did not occur with the PCL6 driver. Oddly enough, setting back to PCL5 produced the same error about the PCL5 driver not being installed. I ignored/dismissed the error box (did not re-install the driver) and reproduced the error, with the second page being the HP "not set up for collating" error page. Any thoughts on what is causing this and how to clear it would be appreciated. The closest fix I could find was on a Microsoft tech page, where they had me reset Winsock from an Administrator command prompt, followed by a reboot; that did not fix the problem. I have also found this http://social.technet.microsoft.com/Forums/windowsserver/en-US/5101195b-3aca-4699-9a06-db4578614e2d/changing-driver-results-in-printer-driver-is-not-installed-error-on-server-2008?forum=winserverprint and will look into trying some of these suggestions, which appear to me to be a "shotgun" approach to fixing the problem.

    Read the article

  • mkfs Operation Takes Very Long on Linux Software Raid 5

    - by Elmar Weber
    I've set up a Linux software RAID level 5 consisting of 4 x 2 TB disks. The array was created with a 64k stripe size and no other configuration parameters. After the initial rebuild I tried to create a filesystem, and this step takes very long (about half an hour or more). I tried to create an XFS and an ext3 filesystem; both took a long time. With mkfs.ext3 I observed the following behaviour, which might be helpful: writing the inode tables runs fast until it reaches 1053 (about 1 second), then it writes about 50, waits for two seconds, then the next 50 are written (according to the console display); when I try to cancel the operation with Control+C, it hangs for half a minute before it is really cancelled. The performance of the disks individually is very good; I've run bonnie++ on each one separately with write/read values of around 95/110 MB/s. Even when I run bonnie++ on every drive in parallel, the values are only reduced by about 10 MB/s. So I'm excluding hardware / I/O scheduling in general as a problem source. I tried different configuration parameters for stripe_cache_size and readahead size without success, but I don't think they are that relevant for the filesystem creation operation. Server details: Linux server 2.6.35-27-generic #48-Ubuntu SMP x86_64 GNU/Linux; mdadm v2.6.7.1. Does anyone have a suggestion on how to further debug this?
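    As an aside, and probably not the cause of the stall itself, mkfs can also be told the RAID geometry explicitly so the filesystem layout is aligned to the array. A rough sketch, assuming 4 KiB blocks, the 64k chunk above, the md device being /dev/md0 (illustrative), and three data disks in the 4-disk RAID-5, so stride = 64k / 4k = 16 and stripe-width = 16 * 3 = 48:
        # hedged sketch: pass the RAID-5 geometry to mke2fs; device name is illustrative
        mkfs.ext3 -b 4096 -E stride=16,stripe-width=48 /dev/md0
    Whether this changes the inode-table timing is a separate question, but it at least rules out misalignment.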

    Read the article

  • Building vs buying a server for an academic lab [closed]

    - by Roy
    I'm looking for advice on the classic build vs. buy question. We need a new Linux server to run MATLAB computations on in our lab (academic). The MATLAB Parallel Computing Toolbox license allows up to 12 local workers, so we are aiming at a 12-core server with 4 GB of memory per core (48 GB total). The system will have an SSD for the OS and a RAID-5 (4 x 2 TB) for data. I looked around and found a (relatively) cheap vendor, Silicon Mechanics, that offers a system to our liking (specs below) for $6,732. However, buying the components from Newegg costs only $4,464! The difference is $2,268, which is 50% of the base cost. If buying from a company can be thought of as a sort of insurance, basically my premium is 50% of the base cost, which to me sounds like a lot. Of course any downtime is bad, but the work is not "mission critical", i.e. if it takes a few days to fix it when it breaks, it's not the end of the world; if it takes weeks to months, then it's a problem. If it breaks 2-3 times in 3 years, not too bad; if it breaks every month, not good. In terms of build experience, I set up a Linux cluster in grad school (from existing computers) and I build my home PCs, but I have never built a server before. The server components I'm thinking about:
    1 x SUPERMICRO SYS-7046T-6F 4U Tower Server Barebone Dual LGA 1366 Intel 5520 DDR3 1333/1066/800 ($1,050)
    12 x Kingston 4GB 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10600) ECC Unbuffered Server Memory ($420)
    2 x Intel Xeon E5645 Westmere-EP 2.4GHz LGA 1366 80W Six-Core ($1,116)
    4 x Seagate Constellation ES 2TB 7200 RPM SATA 6.0Gb/s 3.5" ($1,040)
    1 x SAMSUNG Internal DVD Writer Black SATA ($20)
    1 x Intel 520 Series 2.5" 180GB SATA III MLC SSD ($300)
    1 x LSI LSI00281 PCI-Express 2.0 x8 MD2 Low profile SATA / SAS MegaRAID SAS 9260CV-4i Controller Card ($695)

    Read the article

  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer using Ubuntu. The problem is, we also have developer tools that only work with 'officially' supported Linux distributions that use much older 2.6.18-based kernels. (And even if they worked with newer ones, the vendors could always say they won't "support" the software unless it's on one of their 'officially' supported platforms.) We could of course just tell them to use CentOS or something else 2.6.18-based, and I'm sure their response would be something like: "you can take Ubuntu from our cold, dead hands." :) Which brings me to some questions: is there any good/easy/recommended way to run something like Ubuntu as a host and CentOS 5.x as a guest OS (with which system: Xen, KVM, VMware, ...?), and then roll that into our own custom internal distribution that could be easily installed? KVM looks like a good high-performance option just recently included in RHEL 5.4, but if hardware support for virtualization like Intel VT or AMD-V is necessary, then I'd guess only those folks with fairly new PCs will be able to do it. I would be very interested to hear how anyone else has addressed this kind of issue. EDIT: The target audience / users of this kind of system would be developers; each one needs to run locally licensed commercial software, so building out some separate beefy central machines isn't an option, unfortunately, due to license restrictions. Even if that weren't the case, a couple of developers could quickly eat up the resources with parallel builds. :) Ideally, I was hoping there was some step-by-step guide out there to build your own pre-built distribution that has e.g. CentOS 5.x and Ubuntu Desktop as a guest.

    Read the article

  • DD-WRT Access Point as a Router

    - by Dzh
    Following a suggestion on this question asked on Network Engineering, I am asking the question here. This is an extension to my previous question (I think it was deleted), where I was claiming that DD-WRT was disabling its DHCP server once connected to the network. I was wrong, as it now seems that it is bridging itself with another wireless router connected in parallel. I have two Draytek 2820s and one Netgear WG602v3 with the latest DD-WRT. Let's call one wired-Draytek; it has wireless disabled. The other one, let's call it wireless-Draytek, is connected to wired-Draytek and has wireless with MAC filtering enabled. Once I connect the Netgear to the wired-Draytek, a client that connects to the Netgear will be assigned an IP address from the wireless-Draytek. If the MAC address is not on the wireless-Draytek, the client is unable to obtain an IP address and has no connectivity at all, even with a manually assigned static IP configuration. To illustrate further, this is how the network is set up:
    wired-Draytek ---------- wireless-Draytek
             \_________ Netgear
    What I wish to have is that the Netgear issues IP addresses from its own IP pool and ignores the MAC filtering rules from the wireless-Draytek. It is kind of puzzling how they are bridging themselves (if they are) automatically. Thanks. UPDATE: It's not a home network; I gave you a slightly simplified set-up. If there is a better site on Stack Exchange to ask this, please let me know. The Drayteks are running stock firmware; it's only the Netgear that I've flashed to get more stability. In addition to these routers, I also have three 3COM Baseline Switch 2824s, and another Draytek router with a ProSafe FS752TP PoE switch dedicated to VoIP phones. Wired-Draytek has IP 10.0.0.1 and DHCP disabled, as there is an AD DC which is issuing IP addresses. Wireless-Draytek has IP 1.1.1.1 and DHCP enabled. The Netgear has the default, 192.168.1.1. As per the suggestion, the specific question is: how do I isolate these two wireless routers?

    Read the article

  • Can I boot up a virtual machine natively?

    - by Anshul
    My question is: is it possible to run a virtual machine natively on your hardware if you have installed the proper drivers, etc.? In other words, can I use a VHD as a regular hard drive to boot from? The reason I want to do this is that I do both graphics-intensive and audio-intensive work, but my computer is not powerful enough to handle both at the same time, and many times I install a bunch of audio programs that I don't want affecting the stability of my graphics programs. Basically I wanted to have sandboxing between the two sets of applications. So I tried running the graphics-intensive programs in a VirtualBox VM and doing the audio-intensive work natively (simply because it's a pain to route ASIO audio devices in/out of VirtualBox). This kind of works: the graphics-intensive stuff is tolerable, but still relatively slow, because it's running inside a VM. So my next idea was to just dual-boot and install the graphics and audio programs in separate partitions, but I frequently use them in tandem, so it wouldn't be practical to reboot my machine every time I need to use the other set of programs. But I could live with this scenario: if I need to do more audio-intensive stuff, I'll just boot into the audio partition and run the graphics programs in a VM, and then when I'm working heavily on the graphics part, I'll just boot the graphics partition as a regular OS directly on the hardware. Is this possible? For example, by booting up a VHD as a regular hard drive? Or by setting up dual-boot, and every time the audio partition is shut down, synchronizing the graphics VM's VHD with the native graphics partition? Is it practical, given the above scenario? And if it's not possible, barring buying another computer, can anyone suggest a best-of-all-worlds setup (the worlds being performance, sandboxing, and running in parallel) for the above scenario? Thanks in advance.

    Read the article

  • What's throttling the database?

    - by Troels Arvin
    Hardware: Intel x86_64 with 192 GB of RAM. OS: CentOS 5.4 x86_64. DBMS: DB2 v9.7.1, 64-bit. During certain special workloads (e.g. parallel REORGs/RUNSTATs), I've seen the server transporting 450 MB/s at 25,000 IO/s (yes, there is probably some storage-system caching happening here) while all CPU cores were happily working in an even mix of user mode and wait. And disk benchmark tools can also bring some very satisfying bandwidth and IO/s numbers to the table. On the other hand, we also have another scenario: a single rather complex query with at least one large table scan. db2's "list applications" reports that the query is Executing (not locked). IO: at most 10 MB/s, 500 IO/s; CPU: two cores in 99.9% wait state, all other cores 100% idle. The tables which the query reads from have been altered to have LOCKSIZE=TABLE, so I would think that lock list work is zero. What's going on in such a situation? What tools/snapshots/... can I use to gain better insight in such a case?

    Read the article

  • PHP / SSH2 Multi-threading

    - by Asad Moeen
    I'm basically done using SSH2 with PHP. Some may already know that, while using it, the PHP code actually waits for all the listed commands to be executed over SSH and, when everything is done, it then gives back the results. That is fine for the work I am doing, but I need some commands to be multi-threaded:
    $cmd = MyCommand;
    echo $ssh->exec($cmd);
    So I just want this to run in parallel two times. I googled some stuff but didn't get anywhere with it. As a basic approach, I came across the following, posted by someone, but it didn't work out for me:
    for ($i = 0; $i < 2; $i += 1) {
        exec("php test_1.php $i > test.txt &"); // this executes test_1.php, leaves the process running in the background, and goes to the next iteration of the loop immediately without waiting for the script in test_1.php to complete; $i is passed as an argument
    }
    I tried to put it this way in the loop: exec("echo $ssh->exec($cmd) $i > test.txt &"); but either it never entered the loop or the echo of $ssh->exec failed. I don't really need very neat multi-threading. Even a single-second delay would be fine, thank you.
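    A minimal sketch of that background-process idea, assuming a hypothetical worker script ssh_worker.php that opens its own SSH2 connection and runs the command passed as its argument, with each worker writing to its own output file so they don't clobber each other:
        # launch two PHP workers in the background, then wait for both to finish
        for i in 0 1; do
            php ssh_worker.php "$i" > "test_$i.txt" &
        done
        wait
    The same loop can of course be driven from PHP with exec(), as in the snippet above; the key points are the trailing & and a separate output file per worker.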

    Read the article

  • VGA Cable gave me an Electric Shock when connecting a Projector to a Laptop

    - by Felipe Luarte
    Yesterday I was trying to connect a ViewSonic projector to my Samsung RC420 laptop. I followed these steps: I plugged the projector into a power source, and in parallel I did the same thing with the laptop. Then I turned on the laptop, but not the projector (the projector shows a bright light when it is connected to a power source). I connected the VGA cable to the projector (which was still turned off), and then, when I was getting close to the VGA port of the laptop... BAM! A big spark appeared between the port and the cable! Immediately the whole electricity of my floor went off. A part of the port on my laptop kind of... melted down. It seems to be where the spark started. The same thing happened to the part of the cable involved. Now I'm using the laptop, and it seems that there is no problem with it, though I haven't connected anything to the VGA port yet. The projector still works too; well, it turns on, but I haven't connected anything to it yet either. The projector was connected to the power source by a homemade electrical extension.

    Read the article

  • using i7 "gamer" cpu in a HPC cluster

    - by user1219721
    I'm running the WRF weather model. That's a RAM-intensive, highly parallel application, and I need to build an HPC cluster for it. I use a 10 Gb InfiniBand interconnect. WRF doesn't depend on core count, but on memory bandwidth. That's why a Core i7-3820 or 3930K performs better than high-grade Xeon E5-2600s or E7s. It seems universities use the Xeon E5-2670 for WRF; it costs about $1,500. The SPEC2006 fp_rate WRF benchmark shows the $580 i7-3930K performing the same with 1600 MHz RAM. What's interesting is that the i7 can handle RAM up to 2400 MHz, giving a great performance increase for WRF; then it really outperforms the Xeon. Power consumption is a bit higher, but still less than 20 € a year. Even including the additional parts I'll need (PSU, InfiniBand, case), the i7 route is still 700 € per CPU cheaper than the Xeon. So, is it OK to use "gamer" hardware in an HPC cluster, or should I do it properly with Xeons? (This is not a critical application; I can handle downtime. I think I don't need ECC?)

    Read the article

  • SSD as primary or secondary drive on a small Linux server?

    - by Alex Martelli
    I'm pensioning off my 10-year-old home server and replacing it with an Ubuntu 10.04 box. The two storage devices are a Western Digital Caviar Green 2.0 TB HD and an Intel X25-M 34nm Gen 2 80 GB SATA II 2.5-inch SSD (the box has 8 GB of RAM and an i5 750, if it matters). I don't care much about boot times (since I don't plan to reboot all that often ;-)); the main frequent, performance-demanding task will be (re)building large open source C or C++ software packages from source (as an open source contributor, I do that often). So, I thought I'd keep the SSD as the secondary drive and the HD as the primary one, using the SSD mostly for the files that can otherwise demand a lot of seeking (esp. in a parallel make). However, the friendly vendor (perhaps more experienced with Windows systems than with Linux ones) thinks the "normal" way to configure the machine would be with the SSD as the primary drive. I'm pretty rusty on configuring and tuning systems, so I thought I'd better double-check on SuperUser... thanks in advance for advice about this choice!

    Read the article

  • (Win7) Gets stuck with ~1% CPU. Especially with multithreading

    - by meow
    Windows 7 32-bit, up to date, Intel i7 860. (For some reason the company runs 32-bit Windows everywhere.) I tried to update all motherboard drivers etc. as far as possible. I have a performance issue with a machine which appears in connection with multithreading (or so I think). As an example (and where I most often see it, but it appears with other programs as well): ProteoWizard is a file conversion tool for mass spectrometry files. I can add a list of files and it will attempt to process up to 8 files in parallel (quad core x 2 threads/core). If I choose 1 to 6 files, I start the process and it goes straight through. If I have >= 7 files in the queue, conversion goes to ~20%, then gets stuck for 15 seconds, then continues again, always in "chunks" of a few percent before getting stuck again. During the time the process is stuck, CPU is at 1%. RAM is not the limit; it is maybe at 70% or so and not going up. I don't get the same problem on other, even slower machines. The computer also gets stuck at 1% CPU doing nothing on other occasions, but it is most frequent with multithreading. Where should I look for the problem?

    Read the article

  • Is it possible to record a screen-video from a VNC server?

    - by nikie
    I have a computer that's running VNC server. I would like to record a video of what's going on on this computer, if possible without installing additional software on that computer. Is there a program that can connect to the VNC server port and instead of displaying the screen save it to an (e.g. AVI) video file? Background: One of our customers sometimes has problems with the software he bought from us when he's performing a complex procedure. To help him, we offered that someone (a service technician or programmer) watches what he's doing during that procedure to find out if he's doing something wrong or if there's a bug in the software. Currently, this is done live via VNC. That has a few disadvantages: The service technician has to be in the office at the time. As the customers are in different time zones, that can be in the middle of the night. If the service technician forgets something or doesn't notice something, it's lost. There's no way to see what happened again. Only a single computer can be watched by one service technician at a time. I know I could install normal screen-grab software on the computer, but we're talking about an embedded system with limited RAM, CPU, HDD space, so installing something new is not an easy decision. And VNC is already there. I could of course open a VNC client on some office PC and capture that PC's screen, but I can only record one remote computer that way. I often have to watch up to 8 screens in parallel. (And I don't think that screen-grabbing VNC would improve image quality, either.)

    Read the article

  • How to automatically start VM created by virt-manager?

    - by Jeff Shattock
    I have created a virtual machine with virt-manager that runs on KVM/QEMU. The machine works well when started through virt-manager. However, I would like to be able to start and stop the VM through a script in init.d, so that it comes up and down along with the host. I need to have virt-manager show that the machine is running, and to be able to connect to its console through there. When I use the command line that is produced by running ps -eaf | grep kvm after starting the VM through virt-manager, I get some console messages about redirected character devices, but the machine does start and runs properly. However, I do not get any indication from virt-manager that it has started. How can I modify the command line to get virt-manager to pick up the running VM? Is there anything else about the command line that should change when starting outside of virt-manager? The command line is (slightly reformatted for readability):
    /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1 -name BORON \
      -uuid fa7e5fbd-7d8e-43c4-ebd9-1504a4383eb1 \
      -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/BORON.monitor,server,nowait \
      -monitor chardev:monitor -localtime -boot c \
      -drive file=/dev/FS1/BORON,if=ide,index=0,boot=on,format=raw \
      -net nic,macaddr=52:54:00:20:0b:fd,vlan=0,name=nic.0 \
      -net tap,fd=41,vlan=0,name=tap.0 -chardev pty,id=serial0 -serial chardev:serial0 \
      -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:1 -k en-us -vga cirrus
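    A hedged alternative sketch, rather than invoking /usr/bin/kvm directly: since virt-manager drives libvirt, letting libvirt's own CLI manage the lifecycle keeps virt-manager aware of the domain ("BORON" is the domain name from the command line above):
        # mark the domain to be started automatically when libvirtd comes up at boot
        virsh autostart BORON
        # start it by hand now, and confirm that virt-manager/libvirt sees it running
        virsh start BORON
        virsh list --all
    An init.d script can then call virsh start / virsh shutdown instead of the raw kvm command, which sidesteps the problem of virt-manager not noticing an externally started process.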

    Read the article

  • Unix sort 10x slower with keys specified

    - by KenFar
    My data: It's a 71 MB file with 1.5 million rows. It has 6 fields, four of which are strings of avg. 15 characters, two are integers. Three of the fields are sometimes empty. All six fields combine to form a unique key - and that's what I need to sort on. Sort statement: sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 -o a_out.csv a_in.csv The problem: If I sort without keys, it takes 30 seconds. If I sort with keys, it takes 660 seconds. I need to sort with keys to keep this generic and useful for other files that have non-key fields as well. The 30 second timing is fine, but the 660 is a killer. I could theoretically move the temp directory to SSD, and/or split the file into 4 parts, sort them separately (in parallel) then merge the results, etc. But I'm hoping for something simpler since these results are so bad as-is. Any suggestions?
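    For reference, the split / sort-in-parallel / merge idea mentioned above might look roughly like this (chunk size and temporary file names are illustrative; the key options match the original sort statement):
        # split the 1.5M-row file into ~4 chunks, sort each chunk in the background,
        # then merge the already-sorted chunks with sort -m
        split -l 375000 a_in.csv part_
        for f in part_*; do
            sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 -o "$f.sorted" "$f" &
        done
        wait
        sort -m -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 -o a_out.csv part_*.sorted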

    Read the article

  • Apache /server-status/ gives a 404 not found

    - by user57069
    I am trying to solve a problem where Apache stats aren't displaying correctly in Munin. I've run through quite a few checks and tests regarding the Munin setup, but I think my issue is related to Apache, and my skill set there is lacking. First, system info for the monitored server: CentOS 5.3, kernel 2.6.18-128.1.1.el5, Apache/2.2.3. The "server-status" directive in httpd.conf (I've cross-compared this with another system on which I did a successful parallel install of Munin, correctly showing Apache stats, and the directive below is the same for both):
    ExtendedStatus On
    <Location /server-status>
        SetHandler server-status
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1
    </Location>
    I ran lynx http://localhost/server-status and got HTTP/1.1 404. Taking a look at the Apache access_log:
    127.0.0.1 - - [13/Oct/2010:07:00:47 -0700] "GET /server-status HTTP/1.0" 404 11237 "-" "Lynx/2.8.5rel.1 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/0.9.8e-fips-rhel5"
    mod_status is also loaded:
    % grep "mod_status" /etc/httpd/conf/httpd.conf
    LoadModule status_module modules/mod_status.so
    iptables is turned off. Also, I did notice that the ownership of httpd.conf on this system is root.root, whereas on the system that is displaying correctly it is apache.www; not certain that this matters?? It's got to be a permissions issue, but I'm not certain where the permissions are messed up. Any thoughts on why the test of server-status is giving me a 404?
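    A few generic checks that might help narrow this down (hedged, not specific to Munin, and assuming the stock CentOS httpd layout):
        # confirm mod_status is really loaded at runtime, not just present in the config file
        httpd -t -D DUMP_MODULES
        # dump the parsed virtual host settings; a VirtualHost answering for localhost
        # without the Location block above would also explain the 404
        httpd -S
        # look for any competing server-status / rewrite rules elsewhere in the config tree
        grep -rn "server-status" /etc/httpd/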

    Read the article

  • What are the practical differences between an IP address and a server?

    - by JMC Creative
    My understanding of IPs and other DNS-type, server-related issues really falls short (read: extreme noob). I know a dedicated server would increase speed. What, if any, difference in speed would a dedicated IP make? Am I correct in understanding, from the Best Practices from Yahoo, that I could use the second IP to serve up some content, which would increase the number of parallel downloads for the user? Or are both IPs (purchased from the same hosting account) going to point to the same server? Or how does it work? Are there other optimization things I should be aware of when thinking of purchasing a dedicated IP? Clarification: I am talking about the speed of serving the web pages, i.e. the speed of my website. Yes, I know that an IP and a server are completely different things, not even opposites, just different. But this, indeed, is my question! The question reformulated: will having a second (dedicated) IP on my website speed up the time it takes to load and display for the user? Or does that have nothing at all to do with the IP, and is it only a server issue? I'm sorry if this is still unclear. This is a real question though, I may just not be wording it well.

    Read the article

  • Find out what fonts are being sent to a printer

    - by user38307
    I have an issue where two computers running XP, with identical print drivers, have different behavior when printing over the parallel port to receipt printers. For one type of receipt, printing is instant. For another kind, printing is delayed by ten seconds on most machines, but not on the other. This happens even if I swap out printers. I believe the delay is because this computer has a different set of fonts installed (it is used for graphic design). The printers have built-in fonts, and if you do not use one of the built-in fonts, the printer has to build up an image in memory rather than just spitting out its own fonts. For a particular kind of receipt with special fonts, on a particular computer, the computer is sending a font which the receipt printer does not have built in. My question is: is there a way to find out what fonts are being sent to the printer? This would let me narrow down what I need to modify in the Windows font folder. Thank you!

    Read the article

  • Incoming traffic while on public network

    - by zvikico
    I'm developing a web app and I need to be able to receive incoming traffic from third-party services I use. This is a classic webhooks situation: I send a request with a return address and receive the response (via HTTP) some time later at the given address. The simple solution would be to provide my external IP address and forward the incoming traffic from the router to my machine. However, I'm working in a large office and I cannot control the router configuration, so I'm looking for a different way to achieve this. I do have servers online. I can have a daemon running on one of those servers which handles the incoming traffic, and a parallel daemon on my machine which keeps an open connection with the remote daemon (preferably over SSH); when inbound traffic is received by the remote daemon, it sends it to the local one, which delivers it to the correct port on my machine, as if it had been received in the natural way. Is there any ready-made solution for that? PS. I'm on OS X and my server is Ubuntu. Thanks, zvikico
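    A minimal sketch of that idea using plain SSH remote port forwarding instead of a custom daemon pair (hostname and port numbers are illustrative): requests reaching port 8080 on the public server get tunnelled back to port 3000 on the local machine.
        # run from the local (OS X) machine; -N means no remote command, just the tunnel
        ssh -N -R 8080:localhost:3000 user@my-public-server.example.com
    One caveat: by default sshd binds remote forwards to the server's loopback interface only, so either GatewayPorts yes in the server's sshd_config or a small relay on the server is needed before outside third-party services can reach the forwarded port.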

    Read the article

  • how to export VARs from a subshell to a parent shell?

    - by webwesen
    I have a Korn shell script:
    #!/bin/ksh
    # set the right ENV
    case $INPUT in
      abc) export BIN=${ABC_BIN} ;;
      def) export BIN=${DEF_BIN} ;;
      *)   export BIN=${BASE_BIN} ;;
    esac
    # exit 0 <- bad idea for sourcing the file
    Now, these VARs are exported only in a subshell, but I want them to be set in my parent shell as well, so that when I am at the prompt those vars are still set correctly. I know about . .myscript.sh, but is there a way to do it without 'sourcing'? My users often forget to 'source'. EDIT1: removed the "exit 0" part; this was just me typing without thinking first. EDIT2: to add more detail on why I need this: my developers write code for (for simplicity's sake) two apps: ABC and DEF. Each app is run in production by a separate user, usrabc and usrdef, which have set up their own $BIN, $CFG, $ORA_HOME, whatever, specific to their apps. So:
    ABC's $BIN = /opt/abc/bin   # $ABC_BIN in the above script
    DEF's $BIN = /opt/def/bin   # $DEF_BIN
    etc. Now, on the dev box, developers can develop both ABC and DEF at the same time under their own user account 'justin_case', and I make them source the file (above) so that they can switch their ENV var settings back and forth ($BIN should point to $ABC_BIN at one time, and then I need to switch to $BIN=$DEF_BIN). The script should also create new sandboxes for parallel development of the same app, etc., which forces me to do it interactively, asking for a sandbox name, etc.:
    /home/justin_case/sandbox_abc_beta2
    /home/justin_case/sandbox_abc_r1
    /home/justin_case/sandbox_def_r1
    The other option I have considered is writing aliases and adding them to every user's profile:
    alias setup_env='. .myscript.sh'
    and running it with setup_env parameter1 ... parameterX. This makes more sense to me now.
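    A hedged variant of the alias idea: a small function in each user's ~/.profile (ksh reads it at login) that sets $INPUT and sources the script in the current shell, so the exported variables survive after it returns; the script path is illustrative:
        # usage: setup_env abc   (or: setup_env def)
        setup_env() {
            INPUT="$1"               # the case statement in the script keys off $INPUT
            . /path/to/.myscript.sh  # sourcing keeps the exports in the current shell
        }
    A function behaves like the alias but takes arguments cleanly; either way, the script itself still has to be sourced rather than executed for the exports to stick.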

    Read the article

  • How to stop an IOException error whilst using a combination of jython, pyro and ant?

    - by Kelso
    So, the wonderful low-down on this doozy of a problem. Short version: we are building a distribution system for an item of software we're using. Basically we take our build artifact, store it on an FTP server, which passes it to multiple clients, which execute scripts to patch their servers. Long version: one distribution server, multiple client servers; software: jython 2.5.1, ant 1.8.0, pyro 3.10. The distribution server has an FTP server and a Pyro client running on it. Each client server has a Pyro server running on it. When the Pyro client is told to start the patch procedure, it reads a machine list which contains a list of all the client servers, then connects to each of the Pyro servers one by one and executes the patch procedure. The procedure is: getPatch (gets the latest patch for that server), StopServer (stops the software that may or may not be accessing what needs to be patched), ApplyPatch, StartServer. Each of the processes calls an Ant script, with some folder names and other config passed around. The fun part happens when you go to apply the patch; see below for the error log (I had to remove the folder names for NDA reasons). This is where it gets interesting. Running each section of the procedure individually, i.e. running getPatch, StopServer, etc. one at a time manually, this bug doesn't happen. Physically going to the machine and running the processes, it doesn't happen either. It only occurs when we call all 4 of the processes one after the other, during the ApplyPatch phase, when an Ant replace task is called on multiple files. We think it might have something to do with the JVM keeping hold of the file for a split second or two; however, that is meant to have been fixed according to the bug notes on Ant. So, in short: distribution server == jython == pyro connection == client server == jython == ant script. Error log: <*snip>\ant\deploy.xml:12: IOException in <*snip>\bin\startGs.sh - java.io.IOException:Failed to delete <*snip>\bin\rep4698373081723114968.tmp while trying to rename it. 
at org.apache.tools.ant.taskdefs.Replace.processFile(Replace.java:709) at org.apache.tools.ant.taskdefs.Replace.execute(Replace.java:548) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:390) at org.apache.tools.ant.Target.performTasks(Target.java:411) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1360) at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38) at org.apache.tools.ant.Project.executeTargets(Project.java:1212) at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:441) at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:302) at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:221) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:390) at org.apache.tools.ant.Target.performTasks(Target.java:411) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1360) at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38) at org.apache.tools.ant.Project.executeTargets(Project.java:1212) at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:441) at org.apache.tools.ant.Extaskdefs.SubAnt.execute(SubAnt.java:302) at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:221) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at 
sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) at net.sf.antcontrib.logic.IfTask.execute(IfTask.java:197) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.TaskAdapter.execute(TaskAdapter.java:154) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:390) at org.apache.tools.ant.Target.performTasks(Target.java:411) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1360) at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38) at org.apache.tools.ant.Project.executeTargets(Project.java:1212) at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:441) at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:302) at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:221) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) it at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.Parallel$TaskRunnable.run(Parallel.java:433) at java.lang.Thread.run(Thread.java:619) Caused by: java.io.IOException: Failed to delete <*snip\bin\rep4698373081723114968.tmp while trying to rename it. at org.apache.tools.ant.util.FileUtils.rename(FileUtils.java:1248) at org.apache.tools.ant.taskdefs.Replace.processFile(Replace.java:702) ... 125 more Any help would be appreciated.

    Read the article

  • Asynchronous web crawling in F#, something wrong?

    - by jlezard
    Not quite sure if it is ok to do this but, my question is: Is there something wrong with my code ? It doesn't go as fast as I would like, and since I am using lots of async workflows maybe I am doing something wrong. The goal here is to build something that can crawl 20 000 pages in less than an hour. open System open System.Text open System.Net open System.IO open System.Text.RegularExpressions open System.Collections.Generic open System.ComponentModel open Microsoft.FSharp open System.Threading //This is the Parallel.Fs file type ComparableUri ( uri: string ) = inherit System.Uri( uri ) let elts (uri:System.Uri) = uri.Scheme, uri.Host, uri.Port, uri.Segments interface System.IComparable with member this.CompareTo( uri2 ) = compare (elts this) (elts(uri2 :?> ComparableUri)) override this.Equals(uri2) = compare this (uri2 :?> ComparableUri ) = 0 override this.GetHashCode() = 0 ///////////////////////////////////////////////Funtions to retreive html string////////////////////////////// let mutable error = Set.empty<ComparableUri> let mutable visited = Set.empty<ComparableUri> let getHtmlPrimitiveAsyncDelay (delay:int) (uri : ComparableUri) = async{ try let req = (WebRequest.Create(uri)) :?> HttpWebRequest // 'use' is equivalent to ‘using’ in C# for an IDisposable req.UserAgent<-"Mozilla" //Console.WriteLine("Waiting") do! Async.Sleep(delay * 250) let! resp = (req.AsyncGetResponse()) Console.WriteLine(uri.AbsoluteUri+" got response after delay "+string delay) use stream = resp.GetResponseStream() use reader = new StreamReader(stream) let html = reader.ReadToEnd() return html with | _ as ex -> Console.WriteLine( ex.ToString() ) lock error (fun () -> error<- error.Add uri ) lock visited (fun () -> visited<-visited.Add uri ) return "BadUri" } ///////////////////////////////////////////////Active Pattern Matching to retreive href////////////////////////////// let (|Matches|_|) (pat:string) (inp:string) = let m = Regex.Matches(inp, pat) // Note the List.tl, since the first group is always the entirety of the matched string. if m.Count > 0 then Some (List.tail [ for g in m -> g.Value ]) else None let (|Match|_|) (pat:string) (inp:string) = let m = Regex.Match(inp, pat) // Note the List.tl, since the first group is always the entirety of the matched string. 
if m.Success then Some (List.tail [ for g in m.Groups -> g.Value ]) else None ///////////////////////////////////////////////Find Bad href////////////////////////////// let isEmail (link:string) = link.Contains("@") let isMailto (link:string) = if Seq.length link >=6 then link.[0..5] = "mailto" else false let isJavascript (link:string) = if Seq.length link >=10 then link.[0..9] = "javascript" else false let isBadUri (link:string) = link="BadUri" let isEmptyHttp (link:string) = link="http://" let isFile (link:string)= if Seq.length link >=6 then link.[0..5] = "file:/" else false let containsPipe (link:string) = link.Contains("|") let isAdLink (link:string) = if Seq.length link >=6 then link.[0..5] = "adlink" elif Seq.length link >=9 then link.[0..8] = "http://adLink" else false ///////////////////////////////////////////////Find Bad href////////////////////////////// let getHref (htmlString:string) = let urlPat = "href=\"([^\"]+)" match htmlString with | Matches urlPat urls -> urls |> List.map( fun href -> match href with | Match (urlPat) (link::[]) -> link | _ -> failwith "The href was not in correct format, there was more than one match" ) | _ -> Console.WriteLine( "No links for this page" );[] |> List.filter( fun link -> not(isEmail link) ) |> List.filter( fun link -> not(isMailto link) ) |> List.filter( fun link -> not(isJavascript link) ) |> List.filter( fun link -> not(isBadUri link) ) |> List.filter( fun link -> not(isEmptyHttp link) ) |> List.filter( fun link -> not(isFile link) ) |> List.filter( fun link -> not(containsPipe link) ) |> List.filter( fun link -> not(isAdLink link) ) let treatAjax (href:System.Uri) = let link = href.ToString() let firstPart = (link.Split([|"#"|],System.StringSplitOptions.None)).[0] new Uri(firstPart) //only follow pages with certain extnsion or ones with no exensions let followHref (href:System.Uri) = let valid2 = set[".py"] let valid3 = set[".php";".htm";".asp"] let valid4 = set[".php3";".php4";".php5";".html";".aspx"] let arrLength = href.Segments |> Array.length let lastExtension = (href.Segments).[arrLength-1] let lengthLastExtension = Seq.length lastExtension if (lengthLastExtension <= 3) then not( lastExtension.Contains(".") ) else //test for the 2 case let last4 = lastExtension.[(lengthLastExtension-1)-3..(lengthLastExtension-1)] let isValid2 = valid2|>Seq.exists(fun validEnd -> last4.EndsWith( validEnd) ) if isValid2 then true else if lengthLastExtension <= 4 then not( last4.Contains(".") ) else let last5 = lastExtension.[(lengthLastExtension-1)-4..(lengthLastExtension-1)] let isValid3 = valid3|>Seq.exists(fun validEnd -> last5.EndsWith( validEnd) ) if isValid3 then true else if lengthLastExtension <= 5 then not( last5.Contains(".") ) else let last6 = lastExtension.[(lengthLastExtension-1)-5..(lengthLastExtension-1)] let isValid4 = valid4|>Seq.exists(fun validEnd -> last6.EndsWith( validEnd) ) if isValid4 then true else not( last6.Contains(".") ) && not(lastExtension.[0..5] = "mailto") //Create the correct links / -> add the homepage , make them a comparabel Uri let hrefLinksToUri ( uri:ComparableUri ) (hrefLinks:string list) = hrefLinks |> List.map( fun link -> try if Seq.length link <4 then Some(new Uri( uri, link )) else if link.[0..3] = "http" then Some(new Uri(link)) else Some(new Uri( uri, link )) with | _ as ex -> Console.WriteLine(link); lock error (fun () ->error<-error.Add uri) None ) |> List.filter( fun link -> link.IsSome ) |> List.map( fun o -> o.Value) |> List.map( fun uri -> new ComparableUri( string uri ) ) //Treat uri , 
removing ajax last part , and only following links specified b Benoit let linksToFollow (hrefUris:ComparableUri list) = hrefUris |>List.map( treatAjax ) |>List.filter( fun link -> followHref link ) |>List.map( fun uri -> new ComparableUri( string uri ) ) |>Set.ofList let needToVisit uri = ( lock visited (fun () -> not( visited.Contains uri) ) ) && (lock error (fun () -> not( error.Contains uri) )) let getLinksToFollowAsyncDelay (delay:int) ( uri: ComparableUri ) = async{ let! links = getHtmlPrimitiveAsyncDelay delay uri lock visited (fun () ->visited<-visited.Add uri) let linksToFollow = getHref links |> hrefLinksToUri uri |> linksToFollow |> Set.filter( needToVisit ) |> Set.map( fun link -> if uri.Authority=link.Authority then link else link ) return linksToFollow } //Add delays if visitng same authority let getDelay(uri:ComparableUri) (authorityDelay:Dictionary<string,int>) = let uriAuthority = uri.Authority let hasAuthority,delay = authorityDelay.TryGetValue(uriAuthority) if hasAuthority then authorityDelay.[uriAuthority] <-delay+1 delay else authorityDelay.Add(uriAuthority,1) 0 let rec getLinksToFollowFromSetAsync maxIteration ( uris: seq<ComparableUri> ) = let authorityDelay = Dictionary<string,int>() if maxIteration = 100 then Console.WriteLine("Finished") else //Unite by authority add delay for those we same authority others ignore let stopwatch= System.Diagnostics.Stopwatch() stopwatch.Start() let newLinks = uris |> Seq.map( fun uri -> let delay = lock authorityDelay (fun () -> getDelay uri authorityDelay ) getLinksToFollowAsyncDelay delay uri ) |> Async.Parallel |> Async.RunSynchronously |> Seq.concat stopwatch.Stop() Console.WriteLine("\n\n\n\n\n\n\nTimeElapse : "+string stopwatch.Elapsed+"\n\n\n\n\n\n\n\n\n") getLinksToFollowFromSetAsync (maxIteration+1) newLinks getLinksToFollowFromSetAsync 0 (seq[ComparableUri( "http://twitter.com/" )]) Console.WriteLine("Finished") Some feedBack would be great ! Thank you (note this is just something I am doing for fun)

    Read the article
