Search Results

Search found 8429 results on 338 pages for 'batch processing'.

Page 39 of 338

  • OS X Snow Leopard stops processing clicks from Wacom Intuos2

    - by antiver
    I'm using a Wacom Intuos2 graphics tablet in OS X Snow Leopard 10.6.2, and it works great 80% of the time. Every 20 minutes or so, OS X just starts ignoring input from the pen. More specifically, only the pen tip is ignored - the side buttons on the pen still work. Putting the Mac to sleep and waking it back up restores functionality to the pen tip. This happens with both of the latest Wacom drivers claiming compatibility with Snow Leopard: versions 6.1.2-5 (Nov 25, 2009) and 6.1.3-3 (Jan 21, 2010). I have no experience with this tablet under other versions of OS X or other drivers. The tablet works 100% in Windows, which leads me to blame either OS X or the OS X drivers.

  • Linux: Simulate Serial Connection from Arduino

    - by shanet
    I'm trying to simulate the serial connection from an Arduino into a Processing applet, since I don't have an Arduino at the moment. Put simply, I want to send bytes from Bash to a serial connection (on /dev/ttyS0) which the Processing applet will pick up as if they came from an Arduino. I tried the answer to the question "How can I send data to the serial port from a Linux shell?", but it simply isn't working, and I don't know how to debug something like this since I've never worked with serial connections before. Any advice? Thanks much.
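
    One way to test without hardware, as a minimal bash sketch assuming socat is installed (the /tmp/ttyV0 and /tmp/ttyV1 names are arbitrary): socat can create a linked pair of pseudo-terminals, so the applet opens one end while you write test bytes to the other.

        # Create two linked pseudo-terminals; leave this running in the background.
        socat -d -d pty,raw,echo=0,link=/tmp/ttyV0 pty,raw,echo=0,link=/tmp/ttyV1 &

        # Pretend to be the Arduino: write a few bytes to one end...
        printf '\x01\x02\x03' > /tmp/ttyV1

        # ...and read them from the other end (point the applet at /tmp/ttyV0).
        cat /tmp/ttyV0

    The applet's serial library would need to open /tmp/ttyV0 rather than /dev/ttyS0, so this exercises the data flow rather than that exact device path.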

  • Batch convert HTML file(s) saved using IE to MHT

    - by ultrasawblade
    I have numerous web sites that I've saved over the years. I used Internet Explorer's "Save As..." option to do this. It saves the original page as an .html document, with the page requirements in a linked folder that has the same name as the document. I want to convert a bunch of these (over 1000) to the single-file .mht format. This can be done through Internet Explorer or Firefox (using the UnMHT extension) by loading the original .html document and re-saving it as an .mht document. Doing that by hand for the number of files I'm talking about is obviously tedious. I'm wondering if anyone knows of a utility, command line or otherwise, that can accomplish this.
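
    One scriptable possibility, sketched here under the assumption that Microsoft Word is installed: Word's COM automation can open an .html page and re-save it as a single-file web archive (save format 9, wdFormatWebArchive). The C:\SavedPages path is a placeholder.

        # Hedged sketch: batch-convert saved .html pages to .mht via Word automation.
        $word = New-Object -ComObject Word.Application
        $word.Visible = $false
        Get-ChildItem 'C:\SavedPages' -Filter *.html | ForEach-Object {
            $doc = $word.Documents.Open($_.FullName)
            $out = [System.IO.Path]::ChangeExtension($_.FullName, 'mht')
            $doc.SaveAs($out, 9)    # 9 = wdFormatWebArchive (single-file .mht)
            $doc.Close($false)      # close without re-saving the original
        }
        $word.Quit()

    Word rebuilds the page from the local files, so the .mht output may differ slightly from what IE itself would save; worth testing on a handful of documents first.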

  • What is the command to check if a command's results mention OK?

    - by Manuel
    Alright, so I was playing around with changing the MTU size and wanted to make a batch file to automatically lower it and then raise it again later. This is probably simple, but I just can't figure it out. The point is: is there a way to run a command that would normally echo "OK", and check whether it actually did say OK? And if it doesn't say OK, stop the rest of the file from running and exit. The command I'm using is netsh interface ipv4 set subinterface "Local Area Connection" mtu=386 store=persistent which, as mentioned above, prints out an OK. I just want to check whether it ran correctly, and if not, then do __
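
    A minimal batch sketch of one way to do that check, assuming the success message contains "Ok" (netsh output text can vary between Windows versions and locales): pipe the output through findstr and test its exit code.

        @echo off
        rem Fail the script unless netsh printed a line containing "ok".
        netsh interface ipv4 set subinterface "Local Area Connection" mtu=386 store=persistent | findstr /i "ok" >nul
        if errorlevel 1 (
            echo netsh did not report Ok - aborting.
            exit /b 1
        )
        echo MTU lowered successfully.
        rem ...the commands that later raise the MTU again would follow here...

    findstr sets errorlevel 0 when it finds a match and 1 when it doesn't, which is what the if errorlevel 1 test keys off.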

  • SharePoint Search: processing filenames containing underscores

    - by Todd Owen
    We use SharePoint Server 2007 to allow employees to search network file shares, but it seems that underscores in filenames are not treated as word separators when indexing the files. As a result, a search for chocolate will:

    - match "chocolate milkshake.doc"
    - but not match "chocolate_cake.doc"

    (Of course, this is a simplified example; in practice the content of the second file might include the word "chocolate" and match on that instead of the filename. But the problem itself is real enough, because a common scenario in a corporate environment is that a user knows the partial name of the file they are looking for and expects to see matching filenames at the top of the search results. And using underscores in filenames is a widely used convention within our company.) Underscores are not treated as word separators in the file content either, although this is less of a concern for us. The root cause of this problem is possibly related to the behaviour of the word breakers that SharePoint uses (i.e. the language-specific DLLs that implement the IWordBreaker interface), although I haven't confirmed this yet. Does anyone know of a workaround for this issue? I have tested with Search Server 2008 Express too (which is based on the same technology), and it is also affected. I do not know whether the problem is fixed in SharePoint 2010 or not.

  • System 67 error scheduled task to transfer files

    - by grom
    Run directly from the command line, the batch script works. But when it is scheduled to run (Windows 2003 R2 server) as the local administrator, I get the following error:

        D:\ScadaExport\exported>ping 192.168.10.78

        Pinging 192.168.10.78 with 32 bytes of data:
        Reply from 192.168.10.78: bytes=32 time=11ms TTL=61
        Reply from 192.168.10.78: bytes=32 time=15ms TTL=61
        Reply from 192.168.10.78: bytes=32 time=29ms TTL=61
        Reply from 192.168.10.78: bytes=32 time=10ms TTL=61

        Ping statistics for 192.168.10.78:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 10ms, Maximum = 29ms, Average = 16ms

        D:\ScadaExport\exported>net use Z: \\192.168.10.78\bar-pccommon\scada\
        System error 67 has occurred.

        The network name cannot be found.

    Any ideas? Google is turning up nothing useful; I keep finding results about DNS and the like, but I'm using the IP address here.
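
    For what it's worth, a scheduled task runs without an interactive user's network context, so supplying credentials explicitly on the net use line (or skipping the drive letter and using the UNC path directly) is the usual first thing to try. A hedged batch sketch - DOMAIN\user and the password are placeholders, and note that the trailing backslash in \scada\ above may itself upset net use:

        rem Map with explicit credentials; a scheduled task inherits no mapped drives.
        net use Z: \\192.168.10.78\bar-pccommon\scada /user:DOMAIN\administrator P@ssw0rd
        if errorlevel 1 exit /b 1

        rem Or avoid the mapping entirely and address the share directly:
        copy D:\ScadaExport\exported\*.* \\192.168.10.78\bar-pccommon\scada\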

  • Group Policy processing and autologon on Windows 7

    - by Jason Berg
    I'm trying to accomplish a few things via Group Policy on Windows 7: software installation, mapping drives, mapping printers, etc. I've got these computers set to autologon. The problem I'm running into is that the computers log on before DHCP has done its thing, and therefore they don't apply any group policies properly. How do I fix this? I've already set the "Always wait for the network at computer startup and logon" policy. I've read up a bit, and this doesn't actually mean that it waits for DHCP, so it's a little pointless. Anything that would delay logon would work - or some way to make the computer wait for DHCP.
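
    One knob worth trying, as a hedged batch sketch (assumes the GpNetworkStartTimeoutPolicyValue registry value, which is meant to extend how long Group Policy waits for the network at startup when the "Always wait for the network" policy is enabled):

        rem Extend Group Policy's startup network wait to 60 seconds.
        rem Only takes effect alongside "Always wait for the network at
        rem computer startup and logon".
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" ^
            /v GpNetworkStartTimeoutPolicyValue /t REG_DWORD /d 60 /f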

  • Product Recommendation: Good job scheduler for windows servers?

    - by Bret Fisher
    Looking for a mostly-GUI tool that is low cost (less than $1k, but not required) and allows you to create scheduled tasks and jobs without writing VBScript, batch files, or PowerShell. Something simple that speaks SMB/CIFS, SMTP, LDAP, etc. for such things as "delete some files based on a list of folders from this text file" or "disable all users with expired accounts" or "delete all disabled users not in this AD group". I've seen some of the big multi-OS enterprise task automation systems and they just look way overkill. We're a Windows-only shop, Server 2003 or newer, and there's got to be a simple non-agent-based product that is drag-n-drop for some of this basic automation. Today we use all three languages mentioned above, and the scripts are not as reliable as a workflow-based tool would be. Thanks.

  • Troubleshooting a high SQL Server Compilation/Batch-Ratio

    - by Sleepless
    I have a SQL Server (quad core x86, 4GB RAM) that constantly has almost the same values for "SQLServer:SQL Statistics: SQL compilations/sec" and "SQLServer:SQL Statistics: SQL batches/sec". This could be interpreted as a server running 100% ad hoc queries, each one of which has to be recompiled, but this is not the case here. The sys.dm_exec_query_stats DMV lists hundreds of query plans with an execution_count much larger than 1. Does anybody have any idea how to interpret / troubleshoot this phenomenon? BTW, the server's general performance counters (CPU,I/O,RAM) all show very modest utilization.
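
    A quick way to sanity-check plan reuse, as a T-SQL sketch against the standard DMVs the question already mentions: if many cached plans show a high execution_count, the workload is demonstrably not purely ad hoc, and a matching compilation rate points elsewhere (e.g. statement-level recompiles, which have their own "SQL Re-Compilations/sec" counter).

        -- List cached plans by how often they are reused.
        SELECT TOP (50)
            qs.execution_count,
            qs.plan_generation_num,           -- values > 1 mean the plan was recompiled
            SUBSTRING(st.text, 1, 200) AS query_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.execution_count DESC;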

  • What's the equivalent of Wevtutil in XP or 2003?

    - by Matt
    I have a batch file for saving event logs to our shared drive. I want to do this for XP and Server 2003 without very much effort. What could I use, since Wevtutil is only on Vista and up?

        rem Script start here
        rem Timestamp Generator
        set BACKUP_PATH=\\shared-drive\it\Temp\Event-Logs\

        rem Parse the date (e.g., Thu 02/28/2013)
        set cur_yyyy=%date:~10,4%
        set cur_mm=%date:~4,2%
        set cur_dd=%date:~7,2%

        rem Parse the time (e.g., 11:20:56.39)
        set cur_hh=%time:~0,2%
        if %cur_hh% lss 10 (set cur_hh=0%time:~1,1%)
        set cur_nn=%time:~3,2%
        set cur_ss=%time:~6,2%
        set cur_ms=%time:~9,2%

        rem Set the timestamp format
        set timestamp=%cur_yyyy%%cur_mm%%cur_dd%-%cur_hh%%cur_nn%%cur_ss%%cur_ms%

        rem Set the computername format
        set servname=%computername%

        wevtutil epl System %BACKUP_PATH%\%servname%_%timestamp%_system.evtx
        wevtutil epl Application %BACKUP_PATH%\%servname%_%timestamp%_application.evtx
        wevtutil epl Security %BACKUP_PATH%\%servname%_%timestamp%_security.evtx
        rem End of Script
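
    One candidate on XP/2003, sketched here with the caveat that it goes through WMI rather than a direct wevtutil equivalent: the Win32_NTEventlogFile class exposes a BackupEventlog method, reachable through wmic's nteventlog alias. The account needs backup rights, and BackupEventlog writes to a path local to the target machine, hence the copy at the end (C:\Temp is a placeholder).

        rem Export the classic .evt logs via WMI, then copy them to the share.
        wmic nteventlog where "LogfileName='System'" call BackupEventlog "C:\Temp\%servname%_%timestamp%_system.evt"
        wmic nteventlog where "LogfileName='Application'" call BackupEventlog "C:\Temp\%servname%_%timestamp%_application.evt"
        wmic nteventlog where "LogfileName='Security'" call BackupEventlog "C:\Temp\%servname%_%timestamp%_security.evt"
        copy C:\Temp\%servname%_%timestamp%_*.evt %BACKUP_PATH%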

  • Processing-time billing in Amazon EC2

    - by Rafael Almeida
    Hi all! I think my question is fairly basic, but I would like a clarification: in the pricing part of AWS we can see that Amazon charges around $0.10 per 'instance computing hour'. I've seen in a blog post somewhere (can't remember where exactly, and even if I did I think it was in Portuguese anyway) that this way your minimum monthly payment would be $72 (= $0.10/hour x 24 hours x 30 days). Is this correct? (I don't think it is!) My understanding is that this 'virtual computing time' is only used when your machine is actually doing something (serving pages, serving the admin via ssh, whatever), so real billable usage would be less than 720 hours/month in most webserver scenarios. Is my view correct? If it is, then it leads me to another question: is it economically interesting to buy access to one of these instances for testing? I mean, would I have the 'freedom' to 'forget' about it for a month and receive a very-close-to-zero (as in, a few cents) bill? Do you do it/know of anybody who does? Any thoughts on the matter (as in, "yes, it's a good idea", or "yes, but there's this 'gotcha': ...", or "no, nobody does it because of...")? PS: sorry for the long question text. I highlighted the main questions for easy viewing. Also, I'm not sure whether this question is actually more than one and whether it's desirable for the community, so sorry if it is too! Thanks in advance!

  • Z-order with Alpha blending in a 3D world

    - by user41765
    I'm working on a game in a 3D world with 2D sprites only (like the game Don't Starve). (OpenGL ES2 with C++.) Currently I order elements back to front before drawing them, without batching (so 1 element = 1 draw call). I would like to implement batching in my framework to decrease draw calls. Here is what I've got for the moment:

    1. Order all elements of my scene back to front.
    2. Send the ordered list of elements to the renderer.
    3. The renderer looks in its batch manager for a batch matching the given element and its material.
    4. If no such batch exists, create a new one.
    5. If a batch exists for this material, add the sprite to the batch.
    6. Compute a big mesh with all the sprites of each batch (1 material type = 1 batch).
    7. When all batches are ready, the batch manager computes draw commands for the renderer.
    8. The renderer processes the draw commands (bind shader, bind textures, bind buffers, draw elements).

    (An image and explanation illustrating the problem were linked here.) But I've got a problem: objects can be behind other objects that live inside another batch. How can I handle something like that? Thanks!
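
    The usual resolution is to keep the global back-to-front order as the primary constraint and batch only contiguous runs of sprites that share a material, cutting a new batch at every material change (so one material can appear in several batches per frame). A minimal sketch of that idea in plain C++ (no GL calls; Sprite and the printed commands are illustrative):

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        struct Sprite { float depth; int materialId; };

        int main() {
            std::vector<Sprite> sprites = {
                {9.f, 0}, {7.f, 1}, {5.f, 1}, {3.f, 0}, {1.f, 0},
            };
            // Sort back to front so alpha blending stays correct.
            std::sort(sprites.begin(), sprites.end(),
                      [](const Sprite& a, const Sprite& b) { return a.depth > b.depth; });

            // Cut a new batch at every material change; sprites are never
            // reordered across batches, so depth order is preserved.
            int inBatch = 0;
            for (size_t i = 0; i < sprites.size(); ++i) {
                if (i == 0 || sprites[i].materialId != sprites[i - 1].materialId) {
                    if (inBatch) std::printf("draw batch of %d sprites\n", inBatch);
                    std::printf("bind material %d\n", sprites[i].materialId);
                    inBatch = 0;
                }
                ++inBatch;
            }
            if (inBatch) std::printf("draw batch of %d sprites\n", inBatch);
        }

    With the sample depths above this emits: material 0, a 1-sprite batch, material 1, a 2-sprite batch, material 0 again, a 2-sprite batch - fewer draw calls than five, but never a wrong blend order.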

  • Command line raw image processing tools in Linux?

    - by ???
    I'm wondering if there is any command to process raw images. For example:

        cat raw1.img | raw2jpg -w 640 -h 480 -pitch 1024 -pixelformat R8G8B8

    and more examples:

        cat raw1.img raw2.img > y-merge.img

        tr='transpose -pitch 1024 -depth 24'
        cat <(cat raw1.img | $tr) <(cat raw2.img | $tr) | transpose -pitch 480 > x-merge.img

    and something like this:

        cat gamebitmap.dat | (
            w=`readint32`
            h=`readint32`
            raw2png -w $w -h $h -depth 24 -pixelformat R8G8B8
        ) | png2svg -extractoutline -fuzzy -error 8 -smooth

    It seems a little tricky, but is it possible? Does ImageMagick support such raw formats?
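
    ImageMagick does read headerless raw data through its rgb: (and gray:, etc.) pseudo-formats, with the geometry and depth supplied on the command line. A small bash sketch, assuming a 640x480 dump with 8 bits per channel (a pitch wider than width*3 would need the -size padded to match):

        # Decode a raw 640x480 R8G8B8 dump straight to JPEG.
        convert -size 640x480 -depth 8 rgb:raw1.img raw1.jpg

        # The same works from a pipe, close to the raw2jpg idea above:
        cat raw1.img | convert -size 640x480 -depth 8 rgb:- raw1.jpg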

  • File Sync Solution for Batch Processing (ETL)

    - by KenFar
    I'm looking for a slightly different kind of sync utility - not one designed to keep two directories identical, but rather one intended to keep files flowing from one host to another. The context is a data warehouse that currently has a custom-developed solution that moves 10,000 files a day, some of which are 1+ GB gzipped files, between Linux servers via ssh. Files are produced by the extract process, then moved to the transform server where a transform daemon is waiting to pick them up. The same process happens between transform & load. Once the files are moved they are typically archived on the source for a week, and the downstream process likewise moves them to temp then archive as it consumes them. So, my requirements & desires:

    - It is never used to refresh updated files - only to deliver new files.
    - Because it delivers files to downstream processes, it needs to rename each file once done so that a partial file doesn't get picked up.
    - To simplify recovery, it should keep a copy of the source files - but rename them or move them to another directory.
    - If a transfer fails (network down, file system full, permissions, file locked, etc.), it should retry periodically - and never fail in a non-recoverable way, nor in a way that sends a file twice or never sends it.
    - It should be able to copy files to 2+ destinations.
    - It should have a consolidated log so that it's easy to find problems.
    - It should have an optional checksum feature.

    Any recommendations? Can Unison do this well?
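
    For reference, the heart of the "no partial pickups" requirement is just copy-under-a-temporary-name plus an atomic rename at the destination. A minimal bash sketch of that pattern (host and directory names are hypothetical; rsync and ssh assumed available - a real tool would add the retry loop and logging on top):

        #!/bin/bash
        # Deliver one file so the consumer never sees a partial copy.
        src="$1"                      # e.g. /extract/outbox/foo.gz
        dest_host="transform01"       # hypothetical downstream host
        dest_dir="/transform/inbox"
        name=$(basename "$src")

        # 1. Copy under a dot-name the downstream daemon ignores.
        rsync -e ssh "$src" "$dest_host:$dest_dir/.$name.part" || exit 1

        # 2. mv on the destination is atomic within a filesystem, so the
        #    file becomes visible all at once under its real name.
        ssh "$dest_host" "mv $dest_dir/.$name.part $dest_dir/$name" || exit 1

        # 3. Keep a source copy for recovery instead of deleting it.
        mkdir -p "$(dirname "$src")/archive"
        mv "$src" "$(dirname "$src")/archive/"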

  • ssd firmware, linux: updating large batch of drives

    - by wryfi
    I was recently hit with a fatal firmware bug that affected dozens of Crucial SSDs deployed in my datacenter. Many of the affected machines use LSI or other proprietary SAS controllers, which Crucial's bootable ISO does not recognize. None of the affected machines has a Windows license. The story is roughly similar for other SSD mfrs, including Samsung and Intel. To resolve this issue, I was forced to stop each machine, remove the affected SSD, remove the SSD from its hotswap caddy, install it temporarily into my ThinkPad, flash the firmware, reverse, rinse, repeat. It took the better part of a day to get through all the affected devices. I am looking for hardware, software, and/or purchasing strategies to ease this pain, as SSD firmware bugs seem inevitable, and our SSD footprint is growing. My first thought is to get a laptop with eSATA and one of these cables (http://www.newegg.com/Product/Product.aspx?Item=N82E16812311004). That should at least make it so I don't have to remove the drives from their caddies. Surely others have run into this. Any novel solutions?

  • Nodejs for processing js and Nginx for handling everything else

    - by Kevin Parker
    I have Node.js running on port 8000 and nginx on port 80 on the same server. I want nginx to handle all requests (images, CSS, etc.) and forward JS requests to the Node.js server on port 8000. Is it possible to achieve this? I have configured nginx as a reverse proxy, but it's forwarding every request to Node.js; I want nginx to process everything except JS. From nginx/sites-enabled/default:

        upstream nodejs {
            server localhost:8000;   # nodejs
        }

        location / {
            proxy_pass http://192.168.2.21:8000;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
            proxy_redirect off;
            proxy_buffering off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
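
    One way to express that split, sketched under the assumption that the static files live under /var/www (a placeholder path): keep a catch-all location that serves from disk, and add a regex location so only .js requests reach the upstream defined above.

        location / {
            root /var/www;                    # nginx serves images, CSS, HTML itself
        }

        location ~* \.js$ {                   # only JavaScript goes to Node.js
            proxy_pass http://nodejs;         # the upstream block above
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }

    Regex locations take precedence over the plain prefix location, which is what makes the .js rule win for matching requests.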

  • Batch renaming 32000 files - splitting into multiple subdirectories

    - by Gareth
    I've got a web server which has files uploaded to it. There is a script which assigns them numeric IDs and stores them in a corresponding subdirectory. I've now got 32000 of these uploads and that's too many for the server to handle in one directory. The script I'm using does have a way to "namespace" uploads so that ID 12345 - instead of sitting in /files/12345 - would sit in /files/namespaced/000/012/345. The code can deal with this just fine, but I now have 32000 subdirectories in the wrong naming style. What's the easiest way to go through my existing files and put them in the right place?
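
    A minimal bash sketch of the move, assuming each upload is a directory directly under /files named by its numeric ID, with nine-digit zero-padding per the 000/012/345 example (worth dry-running with echo first):

        #!/bin/bash
        cd /files || exit 1
        for d in [0-9]*; do
            p=$(printf '%09d' "$((10#$d))")      # zero-pad; 10# avoids octal parsing
            dest="namespaced/${p:0:3}/${p:3:3}"  # e.g. 12345 -> namespaced/000/012
            mkdir -p "$dest"
            mv "$d" "$dest/${p:6:3}"             # final path component: 345
        done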

  • SCCM not processing hardware Inventory

    - by Sreekumar
    We have some workstations that will not import hardware inventory into the SCCM 2007 database. What I've done:

    Client:
    - verified the workstation object is discovered in SCCM
    - installed the ccm client on the workstation
    - manually ran the hardware inventory action
    - verified the inventory report was sent successfully ("Successfully sent report Destination:mp:MP_HinvEndpoint, ID ...")
    - watched the file enter and exit the temp folder on the client
    - successfully ran MP Spy to verify the client communicates with the server
    - uninstalled the client, deleted the ccm and ccmsetup folders, and reinstalled

    Server:
    - no entries in the MP_Hinv.log file that correspond with the timestamp from the workstation
    - no entries in the dataldr.log file that correspond with the timestamp from the workstation

    Where are these files going? All the troubleshooting blogs expect entries in these logs. This is driving me crazy.

  • users unable to add registry keys to HKCU

    - by Eds
    I may not have this 100% correct, so I need some clarification. Are normal users on a 2003 terminal server allowed to add registry keys to their own HKCU section of the registry, or are they only allowed to edit existing ones? The reason I ask is that we have 3 keys that we need to add for each user on login. I thought it would be as simple as having a straightforward batch script run that silently adds the keys for the user. Here is what I used:

        regedit.exe "C:\Documents and Settings\All Users\Desktop\example.reg"

    When the user runs this batch script, they see nothing, as you would expect, but the keys are not added. If I simply run the .reg file as the user, it asks if I want to add the key, but then it shows an error saying there was an error accessing the registry. Do I need something a bit more complex to accomplish this task? Many thanks, Eds

    EDIT: Contents of the .reg file:

        Windows Registry Editor Version 5.00

        [HKEY_CURRENT_USER\Software\Policies\Microsoft\office\14.0\outlook\Security]
        "PromptSimpleMAPINameResolve"=dword:00000002
        "PromptSimpleMAPIOpenMessage"=dword:00000002
        "PromptSimpleMAPISend"=dword:00000002
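
    Two things worth noting here. regedit needs the /s switch to import silently (regedit /s example.reg); without it, regedit prompts, and many locked-down environments block regedit for users outright. A sketch of an alternative that avoids regedit altogether, using the built-in reg add with the values from the .reg file above (HKCU is normally writable by the user unless policy restricts registry tools):

        set K=HKCU\Software\Policies\Microsoft\office\14.0\outlook\Security
        reg add "%K%" /v PromptSimpleMAPINameResolve /t REG_DWORD /d 2 /f
        reg add "%K%" /v PromptSimpleMAPIOpenMessage /t REG_DWORD /d 2 /f
        reg add "%K%" /v PromptSimpleMAPISend /t REG_DWORD /d 2 /f

    Running these lines from a login script writes the three values without any user interaction.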

  • How do I run a batch file async using PSExec?

    - by Paul Mrozowski
    I have a batch file I run that, among other things, resets the NICs in the machine. I have some watchdog software running on another machine that monitors the first one, and I'd like to run this batch file using PsExec when it detects certain types of failures. The problem I'm having is that since the batch file resets the NICs, it kills the connection PsExec has (I'm OK with that). The real issue is that when PsExec dies, the batch file stops running (leaving the NICs disabled). I've tried using the -i option with PsExec with no luck. Any ideas on how to just fire off the batch file and NOT have it stop when PsExec is disconnected?
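
    One candidate is PsExec's -d switch, which starts the remote process and returns immediately instead of waiting for it to finish, so there is no long-lived session for the NIC reset to kill. A sketch (the machine name and script path are placeholders):

        rem Fire and forget: -d returns as soon as the process has started.
        psexec \\target-machine -d cmd /c C:\scripts\reset-nics.bat

    If the process still dies with the connection, another common workaround is to pre-create a scheduled task on the target that runs the batch file and have the watchdog trigger it with schtasks /run /s target-machine /tn ResetNics - the task then runs entirely under the local Task Scheduler, with no network session to lose.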

  • Red Hat 5.4 slow processing

    - by yucefrizk
    I'm running Red Hat Linux 5.4 on an HP DL580 server with 16 processors and 64 GB of RAM. I'm connecting to the server remotely through SSH. After entering the password, it takes a long time to return the command line; if I press Ctrl+C during this time, I get a command-line prompt but not the correct bash prompt (I have to run bash to get to my correct prompt). I tried to install Apache on the server: ./configure took 4 hours to finish instead of one or two minutes, and an Oracle installation showed the same behavior. The server's disks are mirrored using a RAID controller. Any idea what could be the reason for this slowness?
