Search Results

Search found 21960 results on 879 pages for 'program termination'.


  • How do I export address book from N97

    - by mplungjan
    Hi, I need to copy my numbers from my private Nokia to my office BB. I have not found a way to export my phone numbers from Ovi or elsewhere. On the Mac, iSync stopped working with Snow Leopard, and Ovi on Windows does not export. I do not mind using a Windows suggestion. I lost a description of how to use the Ovi backup files in another program.

    What I have done so far: in Terminal, sudo open -a iSync.app - it launched, but iSync said "this device is not supported by iSync". Went here: http://europe.nokia.com/support/product-support/isync/compatibility-and-download and found a plugin (I am sure that was not there a while ago). Checked software version 22.0.110, installed the plugin, ran iSync, which found and installed my N97 device successfully, and synced. It stopped with "The connection was lost while talking to the phone." See http://discussions.europe.nokia.com/t5/Nseries-and-S60-Smartphones/N97-iSync-Multimedia-Transfer-Modem/m-p/568560 - no news since Jan 2010. I tried to download and install http://best-vcard.en.softonic.com/symbian but the installer fails.

    I simply do not understand why Nokia is giving us such a hard time. I would not have considered switching from Nokia if the Mac had been better supported. It is so frustrating that they just seem not to care about losing Nokia fans like me - especially since I am this outspoken on the net, and what I say on popular forums gets indexed by Google fast. I am very close to just going iPhone here. Hope someone has Nokia's ear.

    UPDATE: I downloaded NbuExplorer from SourceForge. It will extract everything from an Ovi backup into VCF, VCS and VMG files. Very useful software, and free.
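    Since NbuExplorer leaves you with plain .vcf files, a small script can pull the names and numbers back out for whatever the BlackBerry side needs. This is only a rough sketch, assuming simple single-contact vCards; the contacts/ folder name is a placeholder:

        import glob
        import os

        VCF_DIR = "contacts"   # placeholder: folder holding the .vcf files extracted by NbuExplorer

        # Very rough vCard reader: pulls the FN (formatted name) and TEL lines out of each file.
        # Assumes plain-text, single-contact vCards; TEL;CELL:-style parameters are handled by
        # splitting on the first colon.
        for path in glob.glob(os.path.join(VCF_DIR, "*.vcf")):
            name = None
            numbers = []
            with open(path) as vcf:
                for line in vcf:
                    line = line.strip()
                    if line.upper().startswith("FN"):
                        name = line.split(":", 1)[-1]
                    elif line.upper().startswith("TEL"):
                        numbers.append(line.split(":", 1)[-1])
            if numbers:
                print("%s: %s" % (name or os.path.basename(path), ", ".join(numbers)))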


  • Cannot connect to SQL Server running on an Amazon EC2 machine

    - by njj56
    I am using SQL Management Studio 2008 running on an Amazon EC2 machine. I am unable to connect to the database in my ASP.NET application. The EC2 instance has been set to accept connections over the SQL port. I am also able to remote into the machine, as well as view websites hosted on the server. Listed below is the part of the connection string relating to this instance. When the program is run and this connection string is called, it returns TCP error 0 - no return response. It just times out.

        <add name="ProjectServer" connectionString="Data Source=*IP ADDRESS HERE*,1433;Initial Catalog=*Catalog Name*;User ID=IP-0A6ED514\Administrator;"/>

    I removed the IP and the catalog name for the example, but I am sure they are correct. The only thing that I could think may cause an error is the difference in names between the user ID and the server name - the server name is ip-0A6ED514\sharepoint, but the user name is ip-0A6ED514\administrator when I log into the SQL Server manager on the EC2 instance. A password is not used. Not sure if I would need to leave in a blank string for the password - also not sure if the difference between server name and user ID makes a difference. Any help is appreciated. Thank you.

    Update: when this connection string is used without the port, I get TCP provider error 40 - when the port is in there, I get error 0.

    Edit: the SQL server is using Windows authentication - does this make a difference? Usually I always use SQL Server authentication.
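    One quick way to separate a network/firewall problem from an authentication problem is to test the raw TCP connection to port 1433 from the web server's side. A minimal sketch - the IP address is a placeholder, and a successful connect only proves the port is reachable, not that the login works:

        import socket

        host = "10.0.0.1"   # placeholder - the EC2 instance's address
        port = 1433         # default SQL Server TCP port

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(5)
        try:
            s.connect((host, port))
            print("TCP connection to %s:%d succeeded - look at authentication next" % (host, port))
        except socket.error as e:
            print("TCP connection failed: %s - look at security groups / SQL network config" % e)
        finally:
            s.close()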


  • Networking DOS within Windows 7 XP Mode, with a Windows XP/7 Networked Share

    - by theonlylos
    For a while now, one of my clients has been stuck with Corel Paradox 4.0 (it used to be the biggest database system in the DOS days, until Microsoft released Access in the early 90s). I managed to keep it on life support on Windows XP for a few years; however, since switching to Windows 7 x64, I've had to resort to using XP Mode as the sandbox to keep it up and running.

    While I am able to run Paradox as usual in XP Mode, I'm having a serious issue: if I try connecting the install to the network share (which is located on the Windows 7 portion of the system), Paradox keeps exiting because it says the serial number is invalid. Now, I know for a fact that this is an issue with the virtual loopback adapter and also having the VM linked to the physical Ethernet adapter - and while I have solved this issue before, most of my fixes have been bandages, since after a few weeks the issue pops up again.

    Long story short, I wanted to ask if there is a permanent way to link a DOS program to a network share address. For example, when I try using \\tsclient\paradox (the Windows 7 address) I keep getting an error saying I need a valid network address. I've tried mapping that folder to various drive letters such as P:\Paradox - but for some reason that keeps failing over time.

    For what it's worth, Paradox uses a .SOM file to store the network settings; however, it isn't editable in Notepad - it's controlled by a wizard in Paradox. But if that extension rings any bells, I'd welcome any insights.


  • How do photoshop slices and layer comps interact?

    - by Steve314
    I'm interested in using Photoshop (I have CS2) for some user interface design. I was hoping to be able to use slices and layer comps to mark out particular elements, and use JavaScript scripting to export multiple graphics files and text descriptions (positions and sizes of slices, mainly) that will be used by my program. My problem is that I've never used Photoshop for web design, or otherwise used slices, and I'm not confident that I understand how they interact with layer comps. This is what I believe (and hope) is correct:

    - Manual slices aren't affected by layer comps in any way - they aren't saved as part of a layer comp. The same manual slices will be active irrespective of which layer comp is selected.
    - Layer-based slices aren't directly affected by layer comps, but they are indirectly affected in that the layer comp saves details of layer position and style. Thus selecting a layer comp may move a layer and change its style, affecting the location and size of its layer-based slice, or may effectively disable the slice by hiding the layer.
    - Automatic slices aren't directly affected by layer comps, but are indirectly affected due to changes to the layer-based slices.

    So, layer-based slices (which are my main interest) may move, may change size (to accommodate a style such as a drop shadow), and may be effectively disabled by the layer being hidden. Other details (and all details of manual slices) will remain constant irrespective of which layer comp is active. Is that correct?


  • Swapping Function (Fn) and Control (Ctrl) Keys on Lenovo ThinkPad W500

    - by Howiecamp
    I'd like to swap the Fn and Ctrl keys on my ThinkPad W500 (like many others! See: How can I switch the function and control keys on my laptop? and Intercepting the Fn key on laptops).

    1. Numerous folks indicate that Windows doesn't register the Fn key as a keypress, but using Mihov ASCII Master 2.0, which gives the ASCII value of a keypress, I see the Fn key returning FF (perhaps FF in this case means "not registered"). I also see that keys like Ctrl register with one ASCII code when pressed alone and another when pressed in combination with another key. Fn will only register when pressed alone, so Windows definitely isn't seeing the combo. This took a solution like AutoHotkey off the table.
    2. I ran KeyTweak (which shows you the hardware scan codes of a keypress; the Fn key registered as 57443). Using this program I remapped Fn to the Ctrl key, and this worked perfectly. However, I suspect that because of the issue in #1, the combo of, for example, Fn + C did not execute a copy.

    Short of retraining my pinky, I'm actually considering removing the keyboard and re-soldering the connections to swap those keys. I'd love to get some input as to the root technical issue(s) and possible solutions here.


  • With Ubuntu 9.10, my DVD keeps spinning up

    - by Ken
    I have Ubuntu 9.10 on my Intel Mac Mini. When there's a DVD in the drive, and even if there's no program open at all (just looking at the desktop), every minute or so the disc spins up with a loud whirring noise, and I can hear it cranking the motor to seek across the disc. How do I find out what's causing this? And how can I make it stop? Thanks!

    EDIT: I straced nautilus and saw nothing it's doing directly, even when the disc spins up. It does poll inotify regularly, but I don't know how to trace what it's watching, or whether that's even how it receives disc-inserted notifications. It doesn't call inotify_add_watch when I insert a disc or mount it (or eject or umount), but it could be watching all of /dev already, or something like that. Of course, a DVD is mounted read-only, so whether it's inotify or something else, it should never need to poll anything on it. And if it is inotify, it's happening in the kernel, and the kernel should really never need to poll a device it has mounted just to check for notifications.
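    One way to narrow down the culprit is to see which processes actually hold the optical drive open when the spin-up happens (essentially what lsof reports for the device). A minimal sketch that scans /proc - the /dev/sr0 device path is an assumption and may need adjusting for your system:

        import os

        DEVICE = "/dev/sr0"   # assumed device node for the DVD drive; adjust if needed

        # Walk /proc/<pid>/fd and report processes that have the drive's device node open.
        for pid in os.listdir("/proc"):
            if not pid.isdigit():
                continue
            fd_dir = "/proc/%s/fd" % pid
            try:
                fds = os.listdir(fd_dir)
            except OSError:
                continue    # process exited, or no permission (run as root for full coverage)
            for fd in fds:
                try:
                    target = os.readlink(os.path.join(fd_dir, fd))
                except OSError:
                    continue
                if target == DEVICE:
                    try:
                        with open("/proc/%s/cmdline" % pid) as f:
                            name = f.read().replace("\0", " ").strip()
                    except IOError:
                        name = "?"
                    print("pid %s (%s) has %s open" % (pid, name, DEVICE))

    It only catches processes holding the device open at the moment you run it, so running it in a loop around the time of the spin-up gives better odds.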


  • When running PowerShell script as a scheduled task some Exchange 2010 database properties are null

    - by barophobia
    Hello, I've written a script that intends to retrieve the DatabaseSize of a database from Exchange 2010. I created a new AD user for this script to run under as a scheduled task. I gave this user admin rights to the Exchange Organization (as a last resort during my testing) and local admin rights on the Exchange machine.

    When I run this script manually by starting powershell (with runas /noprofile /user:domain\user powershell) everything works fine. All the database properties are available. When I run the script as a scheduled task a lot of the properties are null, including the one I want: DatabaseSize. I've also tried running the script as the domain admin account with the same results. There must be something different in the two contexts but I can't figure out what it is.

    My script:

        Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010

        Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "Starting script"

        $databases = get-mailboxdatabase -status
        if($databases -ne $null)
        {
            Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "Object created"
            $databasesize_text = $databases.databasesize.tomb().tostring()
            if($databasesize_text -ne $null)
            {
                $output = "echo "+$databasesize_text+":ok"
                Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "Path check"
                if(test-path "\\mon-01\prtgsensors\EXE\")
                {
                    Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "Path valid"
                    Set-Content \\mon-01\prtgsensors\EXE\ex-05_db_size.bat -value $output
                }
                Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "Exiting program"
            }
            else
            {
                Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "databasesize_text is empty. nothing to do"
            }
        }
        else
        {
            Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "object not created. nothing to do"
        }

        exit 0


  • Ubuntu 12.04 froze during update, won't boot

    - by Cichol
    I've recently installed Ubuntu 12.04 on my laptop, and every time I tried to update it, it would freeze for a few seconds and tell me that the updates could not be downloaded. After many, many tries I managed to get them downloaded, but then, in the middle of installing them, it froze. Completely. No mouse movement, no blinking lights, no nothing. After a few hours of letting it sit there, I finally hit the power button to do a hard reset, and now when I select Ubuntu on the boot screen (dual-boot with Windows 7), I get a blank purple screen, and then nothing. Another freeze. I've tried getting into the console, but no command I input has any visible effect. I have a ton of music stored on the partition it's in, so I'd really rather not have to reinstall.

    My specs, to the best of my knowledge:

    - Clevo Corp model B7130 (Sager custom)
    - CPU: Intel Core i5 @ 2.53 GHz (4 CPUs)
    - Graphics card: Nvidia GeForce 425M
    - 4096 MB RAM
    - Drivers: whatever comes with the download of 12.04

    As a side note, I installed Ubuntu via the Windows installer program (Wubi). Does that make a difference?


  • Files built with a makefile are disappearing (including the binary)

    - by Reid
    I am building a program on a TS-7800 (SBC), and when I run make (shown below), it appears to go through all of the steps normally, but in the end I do not get a binary file. Why is this, and how can I get my file?

    makefile:

        CC= /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc

        # compiler options
        #CFLAGS= -O2
        CFLAGS= -mcpu=arm9
        #CFLAGS= -pg -Wall

        # linker
        LN= $(CC)

        # linker options
        LNFLAGS=
        #LNFLAGS= -pg

        # extra libraries used in linking (use -l command)
        LDLIBS= -lpthread

        # source files
        SOURCES= HMITelem.c Cpacket.c GPS.c ADC.c Wireless.c Receivers.c CSVReader.c RPM.c RS485.c

        # include files
        INCLUDES= Cpacket.h HMITelem.h CSVReader.h RS485.h

        # object files
        OBJECTS= HMITelem.o Cpacket.o GPS.o ADC.o Wireless.o Receivers.o CSVReader.o RPM.o RS485.o

        HMITelem: $(OBJECTS)
                $(LN) $(LNFLAGS) -o $@ $(OBJECTS) $(LDLIBS)

        .c.o: $*.c
                $(CC) $(CFLAGS) -c $*.c

        RUN :
                ./HMITelem

        #clean:
        #       rm -f *.o
        #       rm -f *~

    Output:

        root@ts7800:ReidTest# make
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c HMITelem.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c Cpacket.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c GPS.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c ADC.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c Wireless.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c Receivers.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c CSVReader.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c RPM.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c RS485.c
        /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -o HMITelem HMITelem.o Cpacket.o GPS.o ADC.o Wireless.o Receivers.o CSVReader.o RPM.o RS485.o -lpthread

    Thank you.


  • Rsync: General file/folder synchronization

    - by Rey Leonard Amorato
    I have a file server which is in charge of pulling a folder tree from multiple workstations on a daily basis. My current method for this is rsync (which works pretty well provided directory names and/or files remain the same); however, when files are renamed or moved about within subdir1, rsync will copy them over to the server, creating duplicates. I have to manually find and delete extraneous files/folders that had been left on the server during previous syncs. Note that I cannot use rsync's --delete flag, because any sync from a workstation would then mirror that particular folder tree instead of merging it into the server.

    Visual diagram:

        Server:        Workstation1:   Workstation2:   Workstation(n):
        Folder*        Folder*         Folder*         Folder*
        -subdir1       -subdir1        -subdir1        -subdir(n)
        -file1         -file1          -file2          -file(n)
        -file2
        -file(n)

    Is there a simple script (preferably in bash, nothing fancy) that can accomplish the deletion of the extraneous files/folders in the event a file is renamed or moved to a different subdir? Is there a different program, much like rsync, that can accomplish this task autonomously and in a much simpler manner? I have looked at unison, but I did not like the fact that it keeps a local database for the syncing info. Any tips at all as to how I am supposed to tackle this? Thank you in advance for your help.

    EDIT: I have tried unison just recently, and I can safely say it is out of the question now. unison is a bi-directional synchronization tool, and from my testing it mirrors the files existing on the server to all workstations. This is unwanted. Preferably, I would want files/folders to stay within their respective workstations and just merge to the server - i.e. uni-directional sync, but with renames/moves propagated to the server. I might have to look into Git/Mercurial/Bazaar as mentioned by kyle, but I'm still unsure if they are fit for the job.
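    For the deletion step specifically, the logic is: after the rsync pass, walk the server-side copy and remove anything that no longer exists on the workstation. A minimal sketch of that idea, assuming you can see the workstation's live tree from the server (for example over a mount) and that the files in question belong only to that workstation - with a genuinely merged tree you would need to check every workstation before deleting anything. Both paths are placeholders:

        import os

        WORKSTATION_ROOT = "/mnt/workstation1/Folder"   # placeholder: source tree as seen from the server
        SERVER_ROOT = "/srv/backup/Folder"              # placeholder: server-side copy of that tree

        # Walk the server copy bottom-up and delete files/dirs that no longer exist on the
        # workstation, i.e. leftovers from renames or moves that rsync (without --delete) left behind.
        for dirpath, dirnames, filenames in os.walk(SERVER_ROOT, topdown=False):
            rel = os.path.relpath(dirpath, SERVER_ROOT)
            source_dir = WORKSTATION_ROOT if rel == "." else os.path.join(WORKSTATION_ROOT, rel)
            for name in filenames:
                if not os.path.exists(os.path.join(source_dir, name)):
                    os.remove(os.path.join(dirpath, name))
            if not os.path.exists(source_dir) and not os.listdir(dirpath):
                os.rmdir(dirpath)

    Swapping os.remove for a print of the path is an easy way to dry-run it before trusting it with real data.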


  • Least CPU-intensive way of streaming your screen on Windows?

    - by sinni800
    Hello, sometimes I like capturing my screen for others to see. Only thing: I am playing games while I do it. I have tried a few streaming solutions, where Windows Media Encoder coupled with my own Windows server appealed to me most, because I can change resolutions, etc. I also tried Ustream coupled with the Flash applet and the Adobe Flash Encoder recording a Camtasia source. Camtasia has the disadvantage, though, that it shows the green-and-black alternating borders and cannot be targeted fullscreen. I like how Xfire does it, but it doesn't work with every game; many are simply not supported.

    A few thoughts about this:

    - Is there a program which captures like Fraps or Xfire (based on Direct3D and OpenGL outputs) and exposes the output to a DirectShow source filter? Which brings me to:
    - Is there hardware-accelerated capturing directly from the graphics card? Maybe including direct encoding with help from OpenCL? Modern graphics cards decode Blu-ray content directly, for example. I should have a modern enough graphics processor for this to be possible (see below).
    - If using Windows Media Encoder: which are the least CPU-intensive settings? Which codec? Is there a newer codec than Windows Media 9, and is it less CPU-intensive? I only have 7, 8 and 9 inside the Encoder.
    - Could the performance be massively increased by having a quad-core CPU (see below)?

    Bandwidth is no problem up to 1000 to 1500 kbit/s (I have 2048).

    My computer specs:

    - Intel Core 2 Duo E8400
    - 4 GB DDR2-800 RAM
    - ATI Radeon HD 5770
    - Using Windows 7 Professional


  • How to Map Fn+Key (Multiple Keys) to Another Key in Windows?

    - by Mohamed Meligy
    I've got a ThinkPad W530 laptop, which replaces the Application key (also known as the Menu key or mouse right-click key, usually between Right Alt and Right Ctrl) with the Print Screen key. So, I need to replace PrtScr with the Application key. That's easy; all key-mapping software like SharpKeys or whatever can do that. There are even a few threads on Super User about dozens of those.

    But then, the part that makes this question different from the others (which is why it's not a duplicate, I think) is that I also don't want to completely lose the PrtScr key. I'm thinking about replacing it with either:

    - Fn + PrtScr
    - Fn + F2

    These seemed to have no special meaning, so I'm not overriding anything, just adding functionality to either of them (one of them, not both) to be the new PrtScr key. I couldn't find any key-mapping software that can detect or let me select more than one key to map, even when the other key is something like the Fn key (although they all can map the Fn key itself, without combination). I know it may make sense why this restriction exists, but it'll be really useful if I can override it.

    Do you know any program that can do that? Thanks a lot.


  • Batch copy gives errors, xcopy works fine

    - by ndm13
    I am writing a general file backup program. It searches the drive for files matching a set of types and then writes them to a folder on the desktop. I wrote it using xcopy on Windows XP, but upon learning that xcopy was deprecated in favor of robocopy in Vista and newer, and still wanting to maintain compatibility, I decided to switch to the non-deprecated copy. This is where the problems begin. I'm trying to fix the copy routine. I thought I had everything sorted out, but it doesn't copy anything. My output is zero files copied for every iteration.

    Original code using xcopy:

        for /r %%a in (*.bmp *.dds *.gif *.jpg *.jpeg *.png *.psd *.pspimage *.tga *.thm *.tif *.tiff) do (
            echo f | xcopy "%%a" "%HOMEDRIVE%%HOMEPATH%\Desktop\LDR\Images\Bitmap\%%~nxa" /q /y /g /c
        )

    Revised (broken) code using copy:

        for /r %%a in (*.bmp *.dds *.gif *.jpg *.jpeg *.png *.psd *.pspimage *.tga *.thm *.tif *.tiff) do (
            copy "%%a" "%HOMEDRIVE%%HOMEPATH%\Desktop\LDR\Images\Bitmap\%%~nxa" /d /y /z
        )

    Output:

        The system cannot find the path specified.
                0 files copied.

    I know that it seems everyone uses either xcopy or robocopy, but can anyone help with copy? Note: I'm using batch to keep it very lightweight and command-line accessible.


  • How to create VHD disk image from a Linux live system?

    - by Federico
    Once more, I have to resort to the experts here at Super User, as my other sources (mainly Google ;-)) didn't prove very helpful... So basically, I would like to create a VHD image of a physical disk to be archived/accessed/maybe even mounted in a virtual machine. Now, there are dozens of articles and tutorials on how to do that on the web, but none that meets exactly the conditions I would like to achieve:

    - I would like the destination file to be a VHD image, as Windows 7 can mount it natively, even over the network, and many other programs can use it (VirtualBox, ...).
    - The disk I'm trying to image contains a Windows XP install, so in theory I could use the disk2vhd utility, but I would like to find a solution that doesn't require booting that Windows XP install (i.e. keep the disk read-only).
    - Thus I was searching for a solution involving some sort of live system (running from a USB stick or the network).

    However, all the solutions that I've come across either make use of disk2vhd or use the dd command under Linux, which does a complete copy of the disk (i.e. even empty blocks) and does not output a VHD file. Is there a tool/program under Linux that can directly create a VHD file? Or is it possible to convert a raw disk image created using dd to a VHD file, without allocating space for the empty blocks? How would you proceed? As always, any advice or comment is highly appreciated!!


  • Can't seem to get python to work

    - by Justin Johnson
    I'm just starting out in Python. The Python interpreter works from the command line (I have 2.4.3), but I can't seem to get Apache to execute Python scripts. All I end up with is a blank screen and nothing in the Apache error logs. I enabled Python via the Plesk control panel. Here's the snippet that was generated in the httpd.include:

        <Files ~ (\.py$)>
            SetHandler python-program
            PythonHandler mod_python.cgihandler
        </Files>

    My test script is one of the examples that comes with the Python downloads at http://python.org/download/

        #!/usr/local/bin/python
        """CGI test 1 - check server setup."""
        # Until you get this to work, your web server isn't set up right or
        # your Python isn't set up right.
        # If cgi0.sh works but cgi1.py doesn't, check the #! line and the file
        # permissions. The docs for the cgi.py module have debugging tips.
        print("Content-type: text/html")
        print()
        print("<h1>Hello world</h1>")
        print("<p>This is cgi1.py")

    That wasn't working, so I changed #!/usr/local/bin/python to #!/usr/bin/python, which is what "which python" tells me, but the results were the same. Like I said, I'm ending up with a blank page. No errors that I know of, unless I'm checking the wrong error log (I'm checking the Apache error log). I'm on a MediaTemple (dv) running CentOS.
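    One thing worth checking separately from the Apache configuration: that test script is written with Python 3 style print() calls. On Python 2.4, print("...") still works (the parentheses are just grouping), but the bare print() line prints the literal text () instead of the blank line that has to separate the CGI headers from the body, which on its own can produce a broken response. A Python 2.4 compatible version of the same test, as a sketch:

        #!/usr/bin/python
        """CGI test - check server setup (Python 2.x syntax)."""

        # The bare print statement emits the empty line that terminates the HTTP headers.
        print "Content-type: text/html"
        print
        print "<h1>Hello world</h1>"
        print "<p>This is a Python 2 version of cgi1.py"

    This doesn't explain a completely blank page with nothing in the error log, but it removes one variable while debugging the handler setup.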


  • Make a local-only daemon listen on a different interface (using iptables port forwarding)?

    - by UniIsland
    I have a daemon program which listens on 127.0.0.1:8000. I need to access it when I connect to my box with VPN, so I want it to listen on the ppp0 interface too.

    I've tried the "ssh -L" method. It works, but I don't think it's the right way to do it, having an extra ssh process running in the background. I tried the "netcat" method; it exits when the connection is closed, so it's not a valid way of "listening". I also tried several iptables rules. None of them worked. I'm not listing here all the rules I've used, but for example:

        iptables -A FORWARD -j ACCEPT
        iptables -t nat -A PREROUTING -i ppp+ -p tcp --dport 8000 -j DNAT --to-destination 127.0.0.1:8000

    The above ruleset doesn't work. I have net.ipv4.ip_forward set to 1.

    Does anyone know how to redirect traffic from the ppp interface to lo? Say, listen on 192.168.45.1:8000 (ppp0) as well as 127.0.0.1:8000 (lo). There's no need to alter the port. Thanks.
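    If the iptables route stays stubborn, another self-contained option (in the same spirit as ssh -L, but without the extra ssh process) is a small userspace forwarder that listens on the VPN address and relays to 127.0.0.1:8000. A minimal sketch - the 192.168.45.1 bind address is taken from the question and has to exist when the script starts:

        import socket
        import threading

        LISTEN_ADDR = ("192.168.45.1", 8000)   # VPN-side address from the question
        TARGET_ADDR = ("127.0.0.1", 8000)      # where the daemon really listens

        def pipe(src, dst):
            # Copy bytes one way until the connection closes, then shut down the peer's write side.
            try:
                while True:
                    data = src.recv(4096)
                    if not data:
                        break
                    dst.sendall(data)
            finally:
                try:
                    dst.shutdown(socket.SHUT_WR)
                except socket.error:
                    pass

        def handle(client):
            upstream = socket.create_connection(TARGET_ADDR)
            threading.Thread(target=pipe, args=(client, upstream)).start()
            threading.Thread(target=pipe, args=(upstream, client)).start()

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(LISTEN_ADDR)
        server.listen(5)
        while True:
            conn, _ = server.accept()
            handle(conn)

    Run it after the ppp0 interface is up (for example from a VPN up-script), since it binds to that specific address.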


  • SQL Server 2005 SP3 on Windows 7 - No Management Studio

    - by Mike Thomas
    I've been trying for a day and a half now to get SQL Server 2005, Dev edition, to work on Windows 7, 64-bit Professional. I install from the disk, then run SP3. I get a failure on the Client Components section of the Installation Progress, along with this vague message:

        Product                   : Client Components
        Product Version (Previous): 1399
        Product Version (Final)   :
        Status                    : Failure
        Log File                  : C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\LOG\Hotfix\SQLTools9_Hotfix_KB955706_sqlrun_tools.msp.log
        Error Number              : 1712
        Error Description         : MSP Error: 1712  One or more of the files required to restore your computer to its previous state could not be found. Restoration will not be possible.

    I've uninstalled all of Visual Studio and tried to make this as clean as possible, and have read a lot of the blog posts, but am really at my wits' end about this. I am not a DBA, but I use SQL Server all the time when coding and testing apps. Does anyone have any ideas as to where I can get this sorted out? I've been at this for a long time and have never encountered an installation as bad as this one.

    Thanks
    Mike Thomas


  • DMG mounting warning message says "it may make computer less secure or cause other problems"

    - by Cawas
    When I try to open a DMG file I get this (I'll just transcribe the image):

        There may be a problem with this disk image. Are you sure you want to open it?
        Opening this disk image may make your computer less secure or cause other problems.

    What does that mean, in fact? What's really wrong with it, and what kind of problem can it cause just by mounting? Someone said:

        When you download a file in Leopard (and Snow Leopard), it's marked as a quarantined file. This occurs by the OS adding an attribute to the file, tagging where it came from (such as "downloaded by Safari"). This is what causes the user to see prompts when running files that were downloaded from the Internet; you may remember being asked to confirm you'd like to launch program XXX downloaded by Safari on XXX date. As a new part of Snow Leopard, files which are tagged with the quarantine attribute also have their integrity checked by fsck, and if that verification fails you will see the message you described, triggered by an unused node in the disc image.

    But really, I didn't get that. What's quarantine? I've just downloaded a file here on Snow Leopard, tried to open it, and got that warning. Apple has a say about quarantine files, and they seem to work the same on Leopard. Plus, I got that file using Google Chrome, while that feature seems to work just with Safari.
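    The quarantine flag described above is just an extended attribute on the downloaded file, so you can look at it directly with the xattr tool that ships with Snow Leopard. A small sketch that shells out to it - the Example.dmg path is a placeholder:

        import subprocess

        path = "Example.dmg"   # placeholder: the downloaded disk image

        # Print the value of the quarantine attribute, if the file carries one.
        proc = subprocess.Popen(["xattr", "-p", "com.apple.quarantine", path],
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate()
        if proc.returncode == 0:
            print("com.apple.quarantine = %s" % out.decode("utf-8", "replace").strip())
        else:
            print("no com.apple.quarantine attribute (or xattr failed): %s"
                  % err.decode("utf-8", "replace").strip())

    The attribute value records which application downloaded the file and when, which is what triggers the prompt regardless of whether the browser was Safari or Chrome.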


  • Apache start failing after apache config modifications, showing syntax error, cannot load php5apache2_2.dll into server

    - by Sandeepan Nath
    I am stuck again with Apache setup, guys. I am working on a Windows 7 system. I copied the working PHP5 installation directory from teammates and copied the necessary .dll files from inside the PHP5 installation folder (like they were in the working setup of teammates) to my windows/system32/. The Apache server started successfully with the default Apache config file. I was able to access localhost in the browser, but PHP code was not parsing. I noticed there was no line like the following in the Apache config file:

        # PHP5 module
        LoadModule php5_module D:/php5/php5apache2_2.dll

    If I add this line, the Apache server fails to start. Running the test configuration gives the following error:

        httpd.exe: Syntax error on line 60 of C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/httpd.conf: Cannot load D:/php5/php5apache2_2.dll into server: The specified procedure could not be found.

    But the dll file is there in the specified location, and I have given the current system user full permissions on the PHP5 installation directory. The same line also appears in the Apache error log, though I am not sure exactly when entries are written to the log file. I am confused about whether log entries are skipped if I have the log file open for reading... because I could not observe a pattern in when entries are made. I saw some log entries being made, some not.

    Oh, why is Apache setup always such a headache?


  • Why can't Logman start?

    - by Bill Paetzke
    I'm setting up my first logman counter, but it's not working! There is some file or folder permissions problem, or maybe I wrote the create-counter statement wrong. Here are my counter commands:

        logman create counter BillTest -si 30 -v nnnnnn -max 200 -o "C:\Temp" -c "\Processor(*)\*" "\Memory(*)\*" "\LogicalDisk(*)\*"
        logman start BillTest

    The first command works; it says counter creation was successful. The second command fails:

        Collection "BillTest" did not start, check the application event log for any errors

    Here's the error in the Event Viewer:

        The service was unable to open the log file C:\Temp_000001.blg for log BillTest and will be stopped. Check the log folder for existence, spelling, permissions, and ensure that no other logs or applications are writing to this log file. You can reenter the log file name using the configuration program. This log will not be started. The error returned is: Access is denied.

    I verified that C:\Temp exists. I'm not a permissions guru, but I did set all the accounts in the Security tab of that folder to "full control". Still, the logman start command failed with the same error. I noticed that it was trying to write to C:\Temp_000001.blg instead of C:\Temp\000001.blg. That might be part of the problem. So I tried to update my counter to use "C:\Temp\" instead of "C:\Temp", but that failed with a path-invalid error. Also, all the examples I saw online did not put a trailing slash, so no dice there. I tried this on my machine (Windows XP) and my dev server (Windows Server 2003). Both failed with the same error. How can I fix this?


  • Prevent Roaming profiles from syncing certain elements

    - by user29919
    Hello everyone,

    I'm somewhat new to the Server 2008 front, and I'm afraid I've hit my first snag: I've set up roaming profiles, and they appear to be working too well. Is there a way to limit, ideally on a folder/object basis, what gets synced with a roaming profile? What I'm trying to do is:

    1) Stop my roaming profile from syncing the desktop layout - I run a dual-screen desktop and a laptop, and it's really annoying to have to reposition everything after logging onto the laptop, because it forces everything onto one screen.
    2) Stop it from syncing registry variables - specifically, I want Visual Studio to load different settings files on each computer. Currently, the variable that contains that path is getting synced whenever I log in, so I get the settings from whatever box I last logged out from.
    3) Stop syncing the Start menu - this one's not as big, but I'm noticing "program not found" icons even for programs that are installed. They work when I click them - they just look ugly.

    I'm running Windows SBS 2008 x64 with two Windows 7 clients (x86 Pro and x64 Ultimate). Is there a simple way to do that? Or am I trying to work too much against what roaming profiles are designed for? I could, of course, set up different profiles for the desktop and laptop, but that seems to defeat the point of roaming profiles entirely...

    Thanks in advance! Any help will be much appreciated =)


  • Symantec Endpoint Protection Virus Definitions

    - by Gus Denton
    I have done some Googling, but I cannot get a definitive answer, certainly not from the Symantec KB. I have a virtualised Windows 2003 R2 server, 32-bit. It has been provisioned to me with the Symantec Endpoint Protection 11.0.62xxx CLIENT (not a definitions server). The directory C:\Program Files\Common Files\Symantec Shared\VirusDefs is 750 MB. It doesn't contain .tmp directories, so it is NOT a corrupt definitions server. It does contain directories named with a date pattern, YYYYMMDD.xxx. Some of these folders are 12 months old, and I would like to recover the space. The Symantec forums are full of this stuff, but a lot of the postings contain links back to documents that are not specific to the Endpoint Protection client. It appears that I should be able to delete the older folders and all will be OK with a service restart; however, there is a warning about having Live Update Administrator installed. Firstly, I have no idea if I have this installed - how do I check? And secondly, can I just ditch these old files and restart?

    Regards
    Gus Denton
    Learning and Teaching, Uni of New South Wales, Sydney, Australia

    UPDATE: For those trying to assist me, I thank you. I have followed some instructions found on the Symantec site and assumed that the response from Nixphoe would resolve my issue. It appears that, as I am on a provisioned VM from a central IT unit, I cannot run the Symantec commands (smc -stop) from the Run prompt with the admin creds that get me in. Basically I need to claw back some disk space from the C: drive, which is being filled up with WSUS patches and Symantec files. I have managed to delete one Symantec cache through the Live Update control panel and recovered 470 MB. I suppose my last question for those more experienced than myself is: can I simply remove, say, the two oldest virus definition folders without completely breaking the Endpoint Protection client and the server?

    Regards
    Gus


  • Setup for a live (low-latency) audio video broadcast over Wi-Fi?

    - by Majal Mirasol
    The Upgrade: We are capturing audio (from a mixer) and video (from a camera) in a main auditorium and passing it to separate rooms within the building. We used to do this via manual audio/video cables and wires. We wanted to "upgrade" the system and wirelessly broadcast the stream via Wi-Fi.

    The Problem: In our current setup (Wirecast running on an A10 on a Wireless-N network), we have the problem of delay. Our streams are delayed from a minute up to five minutes on the clients (laptop/iPad/Android). This had not been a problem with the previous wired connections. Since the wireless network is local, we thought that a delay of less than a second should be achievable.

    Our Question: And so it goes. Is there anybody who has experience with a setup that has both low latency and is at the same time user-friendly for the clients streaming in the program? Any recommendations would be highly appreciated. (Our current setup is on Windows 7, but a setup on a dedicated Linux box is preferred, if achievable.)


  • Understanding mail failure notices, 554

    - by goran
    I'd like to confirm the meaning of a mail failure notice. Here's the message:

        Hi. This is the qmail-send program at mydomain.com.
        I'm afraid I wasn't able to deliver your message to the following addresses.
        This is a permanent error; I've given up. Sorry it didn't work out.

        <[email protected]>:
        1.2.3.4 does not like recipient.
        Remote host said: 554 <[email protected]>: Relay access denied
        Giving up on 1.2.3.4

    The way I understand this is that 1.2.3.4 is not set up to receive mail for this domain. dig domain.com MX shows:

        ;; ANSWER SECTION:
        domain.com.             6245    IN      MX      10 mail.domain.com.
        domain.com.             6245    IN      MX      20 mx.anotherdomain.com.

    (1.2.3.4 is mx.anotherdomain.com.) The puzzling part is that I have reports that messages sent from Gmail get delivered to this address.

    P.S. Is this a proper question for Server Fault?
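    To confirm that reading (the secondary MX rejecting the recipient with 554), you can reproduce the SMTP dialogue the bouncing server saw by talking to that MX directly. A minimal standard-library sketch - the host and addresses are the placeholders from the question:

        import smtplib

        mx_host = "1.2.3.4"                    # placeholder: the MX that generated the 554
        sender = "[email protected]"        # placeholder envelope sender
        recipient = "[email protected]"       # placeholder envelope recipient

        server = smtplib.SMTP(mx_host, 25, timeout=30)
        try:
            server.ehlo()
            code, resp = server.mail(sender)
            print("MAIL FROM -> %s %s" % (code, resp))
            code, resp = server.rcpt(recipient)
            print("RCPT TO   -> %s %s" % (code, resp))   # a 554 relay-denied here matches the bounce
        finally:
            server.quit()

    If the primary MX accepts the same RCPT while the secondary rejects it, that would also explain why mail from Gmail (which may simply be reaching the primary) still gets delivered.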


  • Trying to use a SmartHost with my Exchange 2010 server

    - by Pure.Krome
    Hi folks, I'm trying to use a smarthost with my Exchange 2010 server. Smarthost details:

        Secure SMTPS: securemail.internode.on.net, port 465
        Configure your existing SMTP settings (in your email program) to:
        - use authentication (enter your Internode username and password; enter your username as [email protected])
        - enable SSL for sending email (SMTPS)

    So I've added the smarthost details to my Org Config -> Hub Transport. I then used PowerShell to set the port:

        Set-SendConnector "securemail.internode.on.net" -port 465

    I've then added my username/password (as suggested above) to the smarthost as Basic Authentication (with no TLS). Then I try sending an email and I get the following error message:

        451 4.4.0 Primary target IP address responded with: "421 4.4.2 Connection dropped due to ConnectionReset."

    So I'm not sure how to continue. I also tried ticking the TLS box, but I still get the same error. If I don't use SMTPS (secure SMTP, on port 465) and instead use basic SMTP on port 25 with no authentication, email gets sent. Any ideas?

    EDIT: By the way, I can telnet to that server on port 465 from my mail server... just to make sure I'm not getting firewalled, etc.

