Search Results

Search found 1962 results on 79 pages for 'slightly offtopic'.


  • Terminal emulation has stopped working. Garbage escape chars

    - by oligofren
    To enable me to do some remote administration of our servers I started using a terminal emulation program called TouchTerm Pro on my iPhone. While not the smoothest experience, it has allowed me to leave my computer behind when going out of town, which makes the slightly painful experience worthwhile. As of late, the app unfortunately no longer works. Pressing the up and down keys after logging in via ssh gives me garbage like ^[[A and ^[[B. Combinations with Ctrl - like you can see in the video - no longer work either. Writing full command lines and executing them with the Enter key works, though. Being able to search my bash history was the difference between a usable app and endless frustration, so getting it to work is essential. The app has (of course) met its end of life and is not getting updated anymore. I am not quite sure which side (client or server) has to be "fixed"/hacked to make the control sequences work again. But is there something I can do to make it work as intended? You can see a video of TouchTerm in operation here.
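    The usual first suspect for raw ^[[A / ^[[B sequences is a mismatch between the terminal type the client emulates and the TERM value the server-side shell sees, so one thing worth trying from the server end is forcing a simpler terminal type for that session. A minimal sketch; "vt100" is only an assumption about what TouchTerm emulates, and xterm is another common choice:

        # show what terminal type the ssh session advertised
        echo $TERM
        # force a plainer terminal type for this session and see if the arrow keys behave
        export TERM=vt100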

    Read the article

  • Win/Bios showing wrong RAM in Gateway Netbook?

    - by Ael
    I've seen similar problems, but this seems slightly unique, maybe... I have a Gateway LT21 netbook, Win7. I upgraded the RAM from 1GB to 2GB. It didn't work. So I updated to the latest BIOS, 1.25, and then it worked: 2GB was recognized in the BIOS and in Windows. Everything was fine. Now today it seemed slow and, to my surprise, both the BIOS and Windows show only 1GB. :/ I've run the memory diagnostic, no errors. I entered the BIOS and hit Exit and Save. Still 1GB. I took out the RAM and put it back. Still 1GB. :/ CPU-Z shows 2048MB/2GB of RAM. Further testing: if I put in the old 1GB RAM, turn on, then put in the new 2GB RAM again, the BIOS and Windows show 2GB of RAM. BUT, once restarted at all (even from the BIOS) it seems to go back to showing the incorrect 1GB again. :// (There are very few options in the BIOS, and none appear memory-related.) Any ideas?

    Read the article

  • Office 2010 OCT Outlook Filepaths

    - by vlannoob
    I'm playing around with customizing Office 2010 installs on my network. Normally I just do a full manual install, but as the environment grows (and the lazier I get) it's becoming a pain to do it manually every time. I've read up on and downloaded the Office 2010 OCT tool and it looks relatively straightforward - with one exception - the Outlook profile. I can 'get around it' by just leaving it all as default (or not enabling offline use), but I'd like to customise it slightly so that it's all set up no matter who logs onto the PC. The only issue I have, and my question, is: in the OCT Outlook section, what do you enter into the Path and Filename settings for the OST file and the Offline Address Book under the Enable Offline Use section? I'm sweet with everything else - just that one section, and I think if I bugger that one up it will kill the whole Outlook profile?? It would need to go into each user's unique profile filepath, correct? I have a fair idea of what should be there, but I'm struggling with the correct syntax. I know this is a stupid question....but it's late in the day and my brain is fried ;) As usual - any and all help/assistance is appreciated ;)
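    For what it's worth, the per-user part is normally handled by letting Windows environment variables expand at logon rather than hard-coding a path; something along these lines is the usual shape of the OST entry. A hedged sketch only - whether the OCT expands these variables exactly as written is an assumption worth testing on a single PC first:

        %userprofile%\AppData\Local\Microsoft\Outlook\%username%.ost

    The Offline Address Book setting can point at a folder under the same %userprofile% location, so each user ends up with their own files without any per-machine editing.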

    Read the article

  • sys.dm_exec_query_stats interaction with recompilation

    - by Sam Saffron
    We use sys.dm_exec_query_stats to track down slow queries and queries that are IO offenders. This works great, and we get a lot of very insightful stats. It is clear this is not as accurate as running a profiler trace, as you have no idea when SQL Server will decide to chuck out an execution plan. We have quite a few queries where the wrong execution plan is cached. For example, queries like the following: SELECT TOP 30 a.Id FROM Posts a JOIN Posts q ON q.Id = a.ParentId JOIN PostTags pt ON q.Id = pt.PostId WHERE a.PostTypeId = 2 AND a.DeletionDate IS NULL AND a.CommunityOwnedDate IS NULL AND a.CreationDate > @date AND LEN(a.Body) > 300 AND pt.Tag = @tag AND a.Score > 0 ORDER BY a.Score DESC The problem is that the ideal plan really depends on the date selected (screenshot of ideal plan). However, if the wrong plan is cached, it totally chokes when the date range is big (notice the big fat lines). To overcome this we were recommended to use either OPTION (OPTIMIZE FOR UNKNOWN) or OPTION (RECOMPILE). OPTIMIZE FOR UNKNOWN results in a slightly better plan, which is far from optimal; executions are tracked in sys.dm_exec_query_stats. RECOMPILE results in the best plan being chosen, however no execution counts or stats are tracked in sys.dm_exec_query_stats. Is there another DMV we could use to track stats on queries with OPTION (RECOMPILE)? Is this behavior by design? Is there another way we can force recompilation while keeping stats tracked in sys.dm_exec_query_stats? Note: the framework will always execute parameterized queries using sp_executesql
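    For context, the kind of tracking query this refers to looks roughly like the following (a generic sys.dm_exec_query_stats sketch, not the poster's actual query; as described above, statements run with OPTION (RECOMPILE) simply may not accumulate here):

        SELECT TOP 30
            qs.execution_count,
            qs.total_logical_reads,
            qs.total_elapsed_time / 1000 AS total_elapsed_ms,
            SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_logical_reads DESC;   -- IO offenders first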

    Read the article

  • How can I setup BluePill to Monitor a Rails App Running via Passenger (mod_rails)

    - by Jim Jeffers
    I recently launched a site running Phusion Passenger. Unfortunately, the site went down due to a frozen thread. I was able to save the server by doing a kill -9 on the specific PID. Still, though, I thought Passenger was able to manage this automatically. I have a server with 1GB of memory running one Rails app, with Passenger allotted up to 7 instances. However, when I came to discover the site went down, I found that Passenger had spawned 6 instances, with one of them using up over 800MB of memory and causing the server to swap. As a result I am hoping to set up something like Bluepill on the server, but I'm slightly confused as to how you go about doing it, mainly because Bluepill expects to start/stop the processes it's monitoring. However, in our case, Passenger already restarts processes for us, so we only need to monitor the PIDs of Passenger's instances and kill them once they've gotten too large. Has anyone here set up Bluepill to monitor a Rails app running under Phusion Passenger? Any insight would be useful.

    Read the article

  • Performance experiences for running Windows 7 on a Thin-Client?

    - by Peter Bernier
    Has anyone else tried installing Windows 7 on thin-client hardware? I'd be very interested to hear about other people's experiences and what sort of hardware tweaks they had to do to get it to work. (Yes, I realize this is completely unsupported.. half the fun of playing with machines and beta/RC versions is trying out unsupported scenarios. :) ) I managed to get Windows 7 installed on a modified Wyse 9450 thin client and while the performance isn't great, it is usable, particularly as an RDP workstation. Before installing 7, I added another 256MB of RAM (512MB total), a 60GB laptop hard drive and a PCI video card to the 9450 (this was in order to increase the supported screen resolution). I basically did this in order to see whether or not it was possible to get 7 installed on such minimal hardware, and to see what the performance would be. For a 550MHz processor, I was reasonably impressed. I've been using the machine for RDP for the last couple of days and it actually seems slightly snappier than the default Windows XP Embedded install (although this is more likely the result of the extra hardware). I'll be running some more tests later on as I'm curious to see in particular whether the streaming video performance will improve. I'd love to hear about anyone's experiences getting 7 to work on extremely low-powered hardware, particularly any sort of tweaks that you've discovered in order to increase performance.

    Read the article

  • SharePoint blog site won't search local site... you can only search for Mysites and users

    - by Don
    I have a how-to company blog site that I post to for my clients to access for help. For some reason it has stopped letting anyone search on it. I can search for MySites or users, but when you drop down the search scope to "This Site: blog site name" you get the following reply: "No results matching your search were found. Check your spelling. Are the words in your query spelled correctly? Try using synonyms. Maybe what you're looking for uses slightly different words. Make your search more general. Try more general terms in place of specific ones. Try your search in a different scope. Different scopes can have different results." I have tried the following commands from the index server: 1) net stop osearch, 2) net start osearch, 3) iisreset /noforce. But I'm still not able to search the local blog site; I can only search for users and sites. Please help. Don

    Read the article

  • download and process a file by ftp at set intervals, with error handling, rescheduling and status messages

    - by compound eye
    I want to download a data file from a remote ftp server to my machine at regular intervals. Once the file is downloaded I want to call another script which will process the file. My development machine is Mac OS X, the eventual deployment environment is Linux. What would be the stock-standard way to automate this? I know I can use cron to schedule curl to download, and to run a script that will process the downloaded file, at regular intervals, and I know I could write a slightly more complex script or an application that would do this and add error handling, rescheduling and status emails. But one of my requirements for this project is to write as little custom code as possible; instead I should try to use standard, tried and true existing tools, and if I do have to write code, to try and write the most straightforward code possible. The reason for this is the code will potentially be installed on a large number of machines, all of which will need to be tweaked, customised and maintained by different people, long after I am gone from the project, so the intention is to use well-documented, well-supported tools as much as possible. This seems such a common task, there must be tools and scripts all over the internet, written by people who have carefully considered everything that could possibly go wrong when you need to download and process a file from a remote server at regular intervals, with error handling, rescheduling and sending status messages. Is that what Expect is for? What would you recommend? (The system will be downloading weather prediction data every six hours, so that the system can prepare in the event of bad weather warnings.)
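    As a point of reference, the plain cron-plus-curl route the question mentions stays very small; a sketch like the one below covers the download, retries and a failure e-mail. The host, paths, mail address and the processing script are placeholders, and the retry/notification policy is only an example of what curl and cron give you out of the box:

        # /etc/cron.d/fetch-forecast  --  run every six hours as the "weather" user
        0 */6 * * *  weather  /usr/local/bin/fetch_forecast.sh

        # /usr/local/bin/fetch_forecast.sh
        #!/bin/sh
        curl --fail --silent --show-error --retry 3 --retry-delay 300 \
             -o /var/data/forecast.latest \
             ftp://ftp.example.com/weather/forecast.dat \
          && /usr/local/bin/process_forecast.sh /var/data/forecast.latest \
          || echo "forecast fetch failed on $(hostname)" | mail -s "forecast fetch failed" ops@example.com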

    Read the article

  • At what point does the performance gap between GPU & CPU become so great that the CPU is holding back a system?

    - by Matthew Galloway
    I know that, generally speaking, for gaming performance the GPU is the primary factor that holds back performance, with everything else such as RAM/motherboard/PSU/CPU being secondary in importance to the graphics card. But at some point the other components ARE going to be significant in holding back the whole system! For instance, nobody would be silly enough to play modern games with 512MB of RAM and the very latest graphics card (such as an HD7970), as I bet the performance increase over the same 512MB system with only a mid-range card would be non-existent! Thus it would be a "waste" for such a person to buy any high-end graphics card without first resolving the system's other problems. The same point applies to other components: if the system only had a Pentium II, a current high-end graphics card would be wasted on it! So my core question is: how do you determine at what point, for your system, spending on extra GPU power is completely "wasted"? (Also, a slightly more nuanced question is trying to work out at what point the extra graphics power might not be "wasted" but would be "sub-optimal" value for money, when the expenditure should instead be split between the graphics card and other components. Obviously a gamer shouldn't always just spend on upgrading the graphics card, but needs to balance it out.)

    Read the article

  • Figuring out which PC part is faulty

    - by Davy8
    I have an odd scenario and I'm having trouble figuring out which is the faulty component. First of all, the video doesn't work, monitor says it's not getting a signal. Monitor's not faulty (works on other computer) so the first suspect was video card. However 2 things make me think it's not the video card. (Don't have another machine with PCIe around to test definitively) First, the GPU fan is spinning so it's getting power. Second, tried putting in an older PCI video card that is known to be working (pulled out of another working machine) and there's still no video. Normally if it's not the video card I'd suspect the motherboard, but everything's getting power on the mobo, so I'm not sure. The case apparently doesn't have system speakers, so can't hear any of the diagnostic beeps either. Also not sure whether a faulty CPU would cause no image at all either. The parts are brand new so something's going to get RMA'd but I'm not sure which component is to blame in this case. (Only slightly related, but I also accidentally put too much thermal paste on the CPU. The fan/heatsink instructions said to put the whole tube which seemed like a lot compared to previous experience, and as I started squeezing I knew it was definitely too much and stopped at about 1/3 but against my better judgement I didn't wipe any off. I'm not sure whether that would cause problems other than not cooling as effectively as it should)

    Read the article

  • Alternative method of viewing a database diagram in SQL Server to see what tables have gone missing?

    - by Triynko
    I have a database diagram for my database, but when I open it in SQL Server, I almost immediately get a message saying some permissions changed or tables in the diagram were dropped or renamed, and tables in the diagram vanish before I can even scroll over to see what or where they were. Basically, it's saying, "Hey, you know all that time you spent laying out tables in this diagram... half of them are going to vanish when you view it, and I'm not going to tell you which tables vanished or where they were in the diagram. You're just going to see a bunch of random empty spaces where tables used to be ;)" Ridiculous. So I thought that maybe if I looked in the dbo.sysdiagrams table, I could find some plain-text definition of the diagram to get a clue about the names of the tables that went missing (because their names were probably only changed slightly) or their coordinates in the diagram (because their spatial location would give me a clue as to what they were), so that I could re-add them, but I can't, because it's a binary definition. So, is there some other program I could use to view the existing database diagram that's not going to just drop and forget the missing tables without telling me what they were, or is this information lost, at the mercy of an SSMS-proprietary database diagram format and viewer that refuses to cooperate with me?
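    One crude way to narrow down which tables were renamed, without reading the binary diagram definition at all, is to list the catalog views and eyeball the names and schema-change dates against what the diagram expects. This is only a diagnostic sketch, not a way to recover the diagram layout; note also that a plain sp_rename does not reliably bump modify_date, so treat the dates as a rough filter:

        SELECT name, create_date, modify_date
        FROM sys.tables
        ORDER BY name;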

    Read the article

  • How to automatically start VM created by virt-manager?

    - by Jeff Shattock
    I have created a virtual machine with virt-manager that runs on kvm/qemu. The machine works well when started through virt-manager. However, I would like to be able to start and stop the VM through a script in init.d, so that it comes up and down along with the host. I need to have virt-manager show that the machine is running, and to be able to connect to its console through there. When I use the command line that is produced by running ps -eaf | grep kvm after starting the vm through virt-manager, I get some console messages about redirected character devices, but the machine does start and runs properly. However, I do not get any indication from virt-manager that it has started. How can I modify the command line to get virt-manager to pick up the running VM? Is there anything else about the command line that should change when starting outside of virt-manager? Command line is (slightly reformatted for readability): /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1 -name BORON \ -uuid fa7e5fbd-7d8e-43c4-ebd9-1504a4383eb1 \ -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/BORON.monitor,server,nowait \ -monitor chardev:monitor -localtime -boot c \ -drive file=/dev/FS1/BORON,if=ide,index=0,boot=on,format=raw \ -net nic,macaddr=52:54:00:20:0b:fd,vlan=0,name=nic.0 \ -net tap,fd=41,vlan=0,name=tap.0 -chardev pty,id=serial0 -serial chardev:serial0 \ -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:1 -k en-us -vga cirrus
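    For reference, the usual way around reproducing the kvm command line by hand is to let libvirt itself start the domain, which keeps virt-manager fully aware of it; roughly, using the domain name from the command line above:

        # start the guest on demand (e.g. from an init script) via libvirt rather than kvm directly
        virsh start BORON
        # or mark it to be started automatically whenever libvirtd comes up with the host
        virsh autostart BORON
        # confirm libvirt (and therefore virt-manager) sees it as running
        virsh list --all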

    Read the article

  • How do I configure Jetty (via jettyrunner) so that it names a character set in the Content-Type response header?

    - by Pointy
    I use Jetty (via the oh-so-handy Jetty Runner) for day-to-day web application testing. One thing I've recently stumbled on is the fact that I don't get a character set called out in the "Content-Type" response header all the time. I do get it in response to my application's XMLHttpRequest transactions, but not for plain old pages loaded by <a> links or whatever. I've read a little bit about how to set up a Jetty config file, but I've never been able to completely understand that; all servlet containers are complicated, and while Jetty is pretty simple it's just weird enough that I don't grok the overall idea. Thus, all I do to launch my app is to run the Jetty Runner .jar file with a couple of simple arguments to set up the port number and logfile path, and then I just give it the .war file to run. It works great — except for the missing character set :-) Anybody have a quick sample config file that might fix this? edit — oh if it matters, I'm running Jetty 7.0.0 RC3; I've also tried with a slightly newer version (still 7.something) with exactly the same issue. All my testing is on Ubuntu.

    Read the article

  • Why do I get swap space related errors when I still have lots of free memory in Solaris 10?

    - by Tom Duckering
    I am seeing a few of my services suffering/crashing with errors along the lines of "Error allocating memory" or "Can't create new process" etc. I'm slightly confused by this since logs show that at the time the system has lots of free memory (around 26GB in one case) available and is not particularly stressed in any other way. After noting a JVM crash with a similar error and the added query of "Out of swap space?", it made me dig a little deeper. It turns out that someone has configured our zone with a 2GB swap file. Our zone doesn't have capped memory and currently has access to as much of the 128GB of RAM as it needs. Our SAs are planning to cap this at 32GB when they get the chance. My current thinking is that whilst there is memory aplenty for the OS to allocate, the swap space seems grossly undersized (based on other answers here). It seems as though Solaris wants to make sure there's enough swap space in case things have to swap out (i.e. it's reserving the swap space). Is this thinking right, or is there some other reason that I get memory allocation errors with this large amount of memory free and a seemingly undersized swap space?
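    If it helps anyone hitting the same thing, the reservation picture (and a quick workaround until the capping is sorted) can be checked with the standard Solaris tools; the size and swap file path below are only examples:

        # how much virtual swap is configured, reserved and still available
        swap -s
        swap -l
        # add more swap so reservations stop failing (size/path are examples)
        mkfile 16g /export/swapfile
        swap -a /export/swapfile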

    Read the article

  • DVD/CD burning .zip: is it more reliable, faster, longer lasting to burn a zip of files rather than the files as a folder?

    - by Rob
    Is it more reliable, faster, and longer lasting to burn to CD/DVD a zip (or a few large zips) of files rather than the files as a folder? Just thinking that 1000s of small files might not be as efficiently recorded compared with one or a few large zips. Also, even after the burning program verifies the disc, I also use Beyond Compare to compare the files with those on the disc. It always binary-compares as identical, but I hear the drive stuttering, presumably as the head is being shifted only slightly each time to seek the next file, which leads me to think that it's best to make one or more zips and copy those locally to compare. Or is it that burning individual files to the disc is not as readable, which causes the head to stutter? There aren't any problems, my disc burns are reliable, just thinking more of efficiency and longevity; the discs burn and verify fast enough on my 18x DVD burner. I'm using ImgBurn mostly. Also used Nero in the past. I burn whole discs closed, finalised. Not sure which write mode, but I would think Disc At Once from a temporary cached image made by the burning program would be the most reliable.

    Read the article

  • Suggestions for cleaning up the mess after removing the "system tool" virus?

    - by Ross
    Hi! Last night I got infected with the "System Tool" virus. For those who don't know, it blocks the user from executing any software, changes the desktop, stops all security software from running, and continually requests that you buy its rogue security software. It took me a few hours but I finally managed to remove it. To do this I went into my Ubuntu partition, searched out files that had been created around the time I got infected, and deleted the executable. Then I went back into my W7 partition and ran an MBAM full scan, an MSE full scan, an AVG bootable USB scan, and a ClamAV scan from my Ubuntu partition (together these found 3 more infected executables). I also ran a CCleaner full sweep and the registry cleaner just in case. I think I have found all of the problems but am still concerned that there might be a payload left over from the virus that I didn't find. Do you have any suggestions of what else I can do to be sure? Just FYI, I use W7 64-bit and MSE as my primary antivirus. I was using Chrome when I got infected and it seems that it was due to a slightly out-of-date Java installation (MSE gave me a warning that the website had used a Java exploit and then my desktop changed to the classic "System Tools" desktop). Thank you very much for your help.

    Read the article

  • Which RAM is faster (or, is Crucial's Memory Advisor giving non-optimal advice)?

    - by adpe
    In general, if a PC's motherboard is only specified for RAM up to a given core speed x, will that PC be faster with: RAM of latency y capable of running at a maximum core speed >x or RAM of latency <y capable of running at a maximum core speed of exactly x ? I would have thought the latter, but Crucial's Memory Adviser tool advises the former. So, which of us is correct - me, or the machine? (Here is a concrete example: I wish to upgrade a Toshiba Satellite Pro L300-155 laptop from its current 1GB RAM to 2GB Crucial RAM. The laptop's specifications are given here. I see from those specifications that the laptop is designed for DDR2-667 Ram. Crucial sells two compatible 2GB kits, priced exactly the same as each other: DDR2-667, CL=5; DDR2-800, CL=6. It seems to me that of these two upgrade kits, the first kit would run slightly faster on the L300-155 than the second, because both will presumably be capped at DDR2-667 core speed (see laptop specs), but the second kit has more latency. However, Crucial's Memory Advisor tool recommends the second kit.)

    Read the article

  • What is the name of this DOS font? Where and how do I add it? Why is there a 1-pixel gap?

    - by JBeurer
    So basically I somehow stumbled onto this webpage: www.braindamage.vg and the first thing that hit me hard was the lovely DOS fonts, so naturally I wanted to get them into my IDE badly. I opened the HTML source and CSS file to find the font name: @font-face { font-family: 'Perfect DOS VGA 437'; src: url('http://www.braindamage.vg/wp-content/themes/braindamage/dosfont.eot'); } @font-face { font-family: 'Perfect DOS VGA 437'; src: url('http://www.braindamage.vg/wp-content/themes/braindamage/dosfont.svg#dos') format("svg"), url('http://www.braindamage.vg/wp-content/themes/braindamage/dosfont.ttf') format('truetype'); } So I downloaded the font and added it using Control Panel - Fonts. But once I start using it (Notepad, MSVS 2008 & MSVS 2010) I notice that it looks slightly off: it seems like there's 1 extra pixel between each character compared with how it should look. What is causing this and how do I fix it? Is it Windows XP? (I have disabled font smoothing.) Or is there something wrong with the font file?

    Read the article

  • D-Link wireless router losing outbound data

    - by gsteinert
    I have a Linux box running the Apache web server behind a D-Link wireless router (nothing fancy, just the standard kit that comes with Virgin Media broadband). My issue is that when requesting web pages (from within the network or via the web), the tail end of the page seems to be getting dropped. For example, I tried to display a text-only file, and all I could get was the first 40-70% of the file (it changed slightly with each refresh). The Apache access logs show that only part of the data was being sent (~6000 bytes instead of the 12000+ bytes of the file). Removing my router from the equation fixes the issue and I can download any files, no matter the size, with no problems. My theory is that the uploaded packets are either being dropped or held up by the config of the router. Is there anything I can do to alleviate the problem? (Perhaps a way of reconfiguring the router to upload packets harder/better/faster/stronger, or an option in Apache that provides a workaround.) As a last resort I will get a second NIC for my Linux box and turn it into a router, but that would mean the box will be on 24/7... not the most ideal of circumstances. Gary
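    One cheap thing to rule out before rewiring anything is an MTU/fragmentation problem, since a router that mishandles full-size frames can produce exactly this "first part of the response arrives, the rest doesn't" pattern. This is only a guess at the cause, but it's quick to test from the Linux box; the target host is a placeholder:

        # 1472 bytes of payload + 28 bytes of IP/ICMP headers = a full 1500-byte frame;
        # if this fails through the router while smaller sizes work, MTU/MSS clamping is worth a look
        ping -M do -s 1472 -c 4 www.example.com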

    Read the article

  • Hard Reset USB in Ubuntu 10.04

    - by Cory
    I have a USB device (a modem) that is really finicky. Sometimes it works fine, but other times it refuses to connect. The only solution I have found to fix it once it gets into a bad state is to physically unplug the device and plug it back in. However, I don't always have physical access to the machine it is plugged in on, so I'm looking for a way to do this through the command line. This post suggests running: $ sudo modprobe -w -r usb_storage; sudo modprobe usb_storage However I get an "unknown option -w" output. This slightly modified command: $ sudo modprobe -r usb_storage Fails with the message FATAL: Module usb_storage is in use. If I try to kill -9 the processes marked [usb-storage] before running they refuse to die (I think because they are deeply tied to the kernel). Anyone know of a way to do this? NOTE: I cross-posted this on serverfault as I didn't know which was more appropriate. I will delete and/or link whichever one is answered first.
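    If unloading usb_storage stays a dead end, another approach that often comes close to a physical unplug/replug is unbinding and rebinding the device at the USB driver level through sysfs; the port ID below (1-1.2) is only a placeholder for whatever the kernel shows for the modem:

        # list USB device IDs known to the kernel and work out which one is the modem
        ls /sys/bus/usb/devices/
        lsusb
        # unbind the device (ID is a placeholder), wait a moment, then bind it again
        echo '1-1.2' | sudo tee /sys/bus/usb/drivers/usb/unbind
        sleep 2
        echo '1-1.2' | sudo tee /sys/bus/usb/drivers/usb/bind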

    Read the article

  • apache/httpd responds slower under EL6.1 than EL5.6 (centos)

    - by daniel
    I've read through other threads on performance differences between RHEL6 and RHEL5, but none seem a tight match to mine. My issue manifests itself in a slightly slower average response time (20ms) per request. I have about 10 servers each, of the same hardware spec, running CentOS 6.1 and CentOS 5.6, and the issue is consistent across the group. I am running Ruby on Rails with Passenger under the mod_worker MPM. The Apache config is identical (checked out from the same SVN repo), Ruby and Passenger are identical builds, and the application is identical and served traffic round robin. An interesting clue from server-status: the CentOS 6.1 servers have a steady 20-40 threads in the "Reading Request" state while the CentOS 5.6 servers have around 1. I'm graphing this so I can see it trend over time. I also have a bunch of much newer machines that are significantly faster and are running CentOS 6.1. They dust all the older machines in response time, but I can see they also have a steady 20-40 threads in the "Reading Request" state. This makes me believe I can get their response time down, if I can figure out what is holding up these requests. My gut is telling me that I need to tune some network setting in sysctl, but I haven't figured it out yet. Help is appreciated.
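    If anyone wants to chase the sysctl hunch, the blunt-instrument first step is simply to diff the kernel network settings between a 5.6 and a 6.1 box and look at what changed in the net.* namespace. Purely a diagnostic sketch; the hostnames are placeholders and no particular setting is being blamed:

        ssh cent56-box 'sysctl -a 2>/dev/null | grep ^net\. | sort' > net-el5.txt
        ssh cent61-box 'sysctl -a 2>/dev/null | grep ^net\. | sort' > net-el6.txt
        diff -u net-el5.txt net-el6.txt | less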

    Read the article

  • troubleshooting postfix -> exchange connection issues

    - by Systemspoet
    I have three Linux-based mail routers that run Postfix and relay mail to our on-premises Exchange server as well as to outlook.com, splitting the mail based on LDAP attributes. What I've observed sporadically since upgrading this spring from Exchange 2007 to 2010 is that all three of the mail relays will, for about 20 minutes, fail to connect to Exchange. Postfix logs it as "lost connection with exchange.contosso.edu"; this problem almost always occurs on all three mail relays at the same time, and lasts for slightly under 20 minutes. If I can catch it while it's occurring, and I manually do "telnet exchange.contosso.edu 25" from one mail relay and force a message through (helo, mail from, rcpt to, data, etc.), then it clears that relay up. The Exchange "server" is actually two machines with the HT role on them, load balanced via Windows NLB. I've worked pretty hard to figure out what's happening from the Postfix side and I can't see any evidence of any misbehavior. My question is, how do I attack the problem from the Exchange side? Is there a connection log, or a debug setting, or something I can do to log all of the inbound connections and tell me what's causing Exchange to drop them?
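    On the Exchange 2010 side, the closest thing to a connection log is SMTP protocol logging on the receive connectors, which can be switched on from the Exchange Management Shell; the connector name below is a placeholder for whichever connectors the Postfix relays actually hit:

        Set-ReceiveConnector "HT01\Default HT01" -ProtocolLoggingLevel Verbose
        Get-ReceiveConnector | Select-Object Name, ProtocolLoggingLevel
        # logs end up under <Exchange install path>\TransportRoles\Logs\ProtocolLog\SmtpReceive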

    Read the article

  • The requested operation has failed! (cannot find answer)

    - by Geoff
    I know this problem is plastered all over the web, but I've been searching and trying for hours with no luck. Can someone please give me some help? I originally installed Apache 2.0.64 along with PHP 5.2.17 and went through all of the steps in this tutorial with no luck; I found that the culprit was the LoadModule line. After looking on the internet I found a whole bunch of stuff, but a lot of it referred to PHP 5 and Apache 2.2. Since there seemed to be more info on Apache 2.2, I removed Apache 2.0.64 and installed 2.2. I added the LoadModule line to the conf file but got the same problem. I then followed the steps in this other tutorial, because it was slightly different and had some things I hadn't tried yet, but I still get the same problem. If I comment out the LoadModule line... it works fine, but otherwise I get "The requested operation has failed!". This is what I ended up keeping, since it only takes commenting out one line to get Apache working: LoadModule php5_module "c:/php/php5apache2_2.dll" <IfModule mod_php5.c> AddType application/x-httpd-php .php PHPIniDir "c:/php" DirectoryIndex index.php </IfModule> EDIT: How can I stop getting this error message? UPDATE: Also, please note that I took note of the message on the PHP site that stated that if PHP 5.2 is to be run with Apache, use the VC6 build and not VC9. I had VC9 so I replaced it with VC6; the file is labeled php-5.2.17-nts-Win32-VC6-x86.zip

    Read the article

  • Automating Access 2007 Queries (changing one criteria)

    - by Graphth
    So, I have 6 queries and I want to run them all once at the end of each month. (I know a bit about SQL but they're simply built using Access's design view). So, in the next few days, perhaps I'll run the 6 queries for May, as May just ended. I only want the data from the month that just ended, so the query has Criteria set as the name of the month (e.g., May). Now, it's not hugely time consuming to change all of these each month, but is there some way to automate this? Currently, they're all set to April and I want to change them all to May when I run them in a few days. And each month, I'd like to type the month (perhaps in a textbox in a form or somewhere else if you know a better way) just once and have it change all 6 queries, without having to manually open all 6, scroll over to the right field and change the Criteria. Note (about VBA): I have used Excel VBA so I know the basics of VBA but I don't really know anything specific to Access (other than seeing code a few times). And, others will use this who do not know anything about Access VBA. So, I think I have found a similar question/answer that could do this in VBA, but I'd rather do it some other way. If the query needs to be slightly redesigned later, probably by someone who doesn't know Access VBA at all, it'd be nice to have a solution not involving VBA if that is even possible.
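    For the record, the usual non-VBA way to do this is to have each query read its criterion from a control on a small form that stays open, so the month is typed once and all six queries pick it up; the form, control, table and field names below are placeholders:

        [Forms]![frmMonthlyRun]![txtMonth]

    That expression goes in each query's Criteria row (or in the SQL WHERE clause), for example:

        SELECT *
        FROM tblSales
        WHERE SaleMonth = [Forms]![frmMonthlyRun]![txtMonth];

    The form just needs to be open (it can be hidden) when the queries run; if it isn't, Access falls back to prompting for the value with a parameter box, which is itself a workable zero-VBA option.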

    Read the article

  • How to train users converting from PC to Mac/Apple at a small non profit?

    - by Everette Mills
    Background: I am part of a team that provides volunteer tech support to a local non-profit. We are in a position to obtain a grant to update almost all of our computers (many of them 5- to 7-year-old machines running XP), provide laptops for users that need them, etc. We are considering switching our users from PCs (WinXP) to Macs. The technical aspects of switching will not be an issue for the team; we are in the process of planning data conversions, machine setup, server changes, etc., regardless of whether we switch to Macs or much newer PCs. About 1/4 of the staff use or have access to a Mac at home, so these users already understand the basics of using the equipment. We have another set of (generally younger) users who are technically savvy and, while slightly inconvenienced and slowed for a few days, should be able to switch over quickly. Finally, several members of the staff are older and have many issues using their computers today. We think in the long run switching to Macs may provide a better user experience, fewer IT headaches, and more effective use of computers. The question we have is: what resources and training (webpages, books, online training materials or online courses) do you recommend we provide to users to enable the switchover to happen smoothly? Especially with a focus on providing different levels of training and support to users with different skill levels. If you have done this in your own organization, what steps were successful, and what areas were less successful?

    Read the article
