Search Results

Search found 23890 results on 956 pages for 'issue'.


  • iTunes randomly plays songs while importing, and can't be stopped

    - by Steve Bennett
    I'm importing a gazillion songs over the network into iTunes. Every now and then, it starts playing the song it's currently importing. And because iTunes is basically frozen up during the import process, I can't actually stop it. Then it will suddenly jump to another song a bit later on. Pretty irritating. Is it a known issue? Anything I can do about it? Versions (oops): iTunes 10.5 (141), OS X 10.6.8


  • Glue Records creation

    - by FFrewin
    I need some information on the following issue, as I would like to have it clear in my mind. I have a VPS server. All the sites hosted on this VPS use nameservers under a .gr domain, like ns1.greekdomain.gr and ns2.greekdomain.gr. The .gr domain is one I own with a Greek registrar. Now I want to move two websites with .co.uk domain names to my VPS. The .co.uk domain names are registered with a UK-based registrar. When I went into the domain management panel, I changed the nameservers of my domains to my greekdomain.gr nameservers. However, the panel returned an error about invalid nameservers. After digging, I found that my nameservers are not valid because they do not exist as records in the .co.uk registry. And here my real trouble starts. The .co.uk registrar tells me that I have to ask my hosting provider / .gr registrar to create a new record in the .uk registry for my nameservers. The .gr registrar tells me that my UK registrar needs to create the record. At Nominet (the .co.uk registry), one employee told me that I need to ask my UK registrar, while another employee (who seemed not to understand what I was asking) told me that they cannot change my nameservers for me, and that I should contact anyone else (old hosting provider, UK registrar, .gr registrar) for help. I can't get help from anybody. I have been trying since last week to move my websites to my VPS and I can't. So, the question is: who is responsible, and who is able to create glue records for my nameservers?
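
    One note that may untangle the finger-pointing: glue (host) records for a nameserver live in the registry of the nameserver's own parent zone, so for ns1.greekdomain.gr it is the .gr registry, reached through the .gr registrar, that must publish them; the .co.uk side only needs the names to resolve. A hedged way to check whether glue already exists is to query a parent-zone server directly (the server placeholder below is whatever the first command returns):

        # find the .gr TLD servers, then ask one of them directly for the glue
        dig +short NS gr.
        dig +norecurse @<one-of-the-gr-tld-servers> ns1.greekdomain.gr A

    If the answer comes back empty, the glue has not been registered yet, and the .gr registrar is the party to push on.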


  • md5sum of large files gives different results sometimes

    - by Emanuele
    I have an AMD quad core, 8 GB RAM, 1 SSD with EXT2 (2 months old), and 2 HDDs with EXT4, approximately 1 year old. I'm using Ubuntu 10.04 x86-64, and when I compute the md5sum of large files (9 GB) I sometimes get values different from the one stored in a reference file. After switching the PC off and restarting, I get the expected result no matter how many times I repeat it. But this is random. I've turned on ECC (the fastest possible settings) and the issue seems to be rarer, but I've run memtest86+ for 6+ hours without a glitch (and with ECC off!). Any idea? Should I update the BIOS of my motherboard (an Asus EVO-something... I don't remember it now)? I've tried everything else apart from this, but genuinely don't know what to do anymore... Any suggestion is appreciated!
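
    One way to narrow this down is to checksum the same file several times in a row, dropping the page cache between runs so each pass actually re-reads the disk; unstable results across passes point at the read path (disk, cable, controller, RAM) rather than at the stored reference value. A minimal sketch, assuming the file is called bigfile.bin:

        for i in 1 2 3 4 5; do
            sync
            echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null   # force re-reads from disk
            md5sum bigfile.bin
        done

    If all five sums agree with each other but not with the reference, the reference copy is suspect; if they disagree among themselves, the hardware is.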


  • Able to ping but does not get the data

    - by Dany
    I am facing a problem in my client-server program: when using wireless, I can ping but do not receive any data. There is a source which receives a streaming request from the client via the server. This works fine when all the machines are connected through LAN cables, but when I put all the computers on a Wi-Fi network, all the machines are able to ping each other; yet as soon as the client sends the stream request to the server, pings between server and client say destination unreachable. Everything works well right up until the client sends the streaming request. What might be the issue?
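
    Since cabled Ethernet works and Wi-Fi fails only once real traffic starts, one thing worth ruling out is packet size: streams send full-sized frames while ping defaults to tiny ones, and an MTU problem on the wireless path would produce exactly this split. A hedged check on Linux (the server address is a placeholder):

        # send an unfragmentable full-size packet: 1472 bytes payload + 28 bytes of headers = 1500
        ping -M do -s 1472 192.168.1.10
        # watch the wireless interface while the stream request goes out
        sudo tcpdump -i wlan0 host 192.168.1.10

    If the large ping fails while the small default one succeeds, lowering the MTU on the Wi-Fi interfaces is a reasonable next experiment.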


  • How do I throttle a command in a terminal window?

    - by To Do
    I needed to run convert with a lot of images at the same time. The command took quite a while, but that doesn't bother me. The issue is that this command rendered my computer unusable while it was running (for about 15 minutes). So is it possible to throttle the command by limiting resources (processor and memory) to it, directly from the command line? This can only work if I add something to the same line before pressing Enter, because once I start the process the computer slows so much that it is impossible, for example, to switch to System Monitor and reduce the priority. Edit: top and iotop results. I managed to run top and sudo iotop > iotop.txt while doing one of these convert operations (the iotop.txt file produced is difficult to read). Results of top:

        PID   USER     PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
        14275 username 20 0  4043m 3.0g 1448 D 7.0  80.4 0:16.45 convert

    Results of iotop (terminal control characters stripped):

        Total DISK READ: 1269.04 K/s | Total DISK WRITE: 0.00 B/s
        TID   PRIO USER     DISK READ   DISK WRITE  SWAPIN   IO      COMMAND
        2516  be/4 username 350.08 K/s  0.00 B/s    0.00 %   0.00 %  zeitgeist-datahub
        7394  be/4 username 568.88 K/s  0.00 B/s    77.41 %  0.00 %  --rendere~.530483991
        14275 idle username 350.08 K/s  0.00 B/s    37.49 %  0.00 %  convert S~f test.pdf
        2048  be/4 root     0.00 B/s    0.00 B/s    0.00 %   0.00 %  [kworker/3:2]
        1     be/4 root     0.00 B/s    0.00 B/s    0.00 %   0.00 %  init

    Furthermore, even after the process ends, the computer does not return to its previous performance. I found a way around this by running sudo swapoff -a followed by sudo swapon -a.
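
    For throttling from the same command line, the usual levers are nice (CPU scheduling priority) and ionice (disk I/O priority), both of which can simply be prepended to the command. A hedged sketch, with the convert arguments as placeholders:

        # lowest CPU priority (19) plus the "idle" I/O class: only touches the disk when nothing else wants it
        nice -n 19 ionice -c3 convert input-*.png output.pdf

    Memory is harder to cap with a plain prefix; ulimit -v in the invoking shell can bound the process's virtual memory, though convert may then simply fail once it hits the limit rather than slow down gracefully.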


  • Ubuntu 12.04 - default Radeon driver does not work at all

    - by mumble
    I've recently upgraded to 12.04 LTS and I have an ATI Radeon HD 5670. I've heard that the open source 'radeon' driver is used by default. However, it wasn't showing anything for me. What I did was add the 'nomodeset' boot option and install fglrx. But fglrx didn't work well for me, as it introduced a lot of problems (freezes/glitches), so I removed/purged it and am planning to use the open source driver instead. So my question is this: why is my default radeon driver not working? Is anyone having a similar issue? I've also tried the ubuntu-x-swat drivers by running the following commands:

        sudo add-apt-repository ppa:ubuntu-x-swat/x-updates
        sudo apt-get update

    But the result was the same as with the radeon driver: nothing shows up on system boot. Any ideas? Thanks in advance! Update: running lspci -nn | grep VGA gives me the following:

        02:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Redwood [Radeon HD 5670] [1002:68d8]
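
    When the screen goes blank at boot like this, it helps to confirm whether the radeon kernel module actually loaded and what the X server logged. A few hedged diagnostics, run from a recovery shell or over SSH:

        lsmod | grep radeon                       # is the kernel module loaded at all?
        dmesg | grep -iE "drm|radeon" | tail -40  # kernel mode-setting messages
        grep "(EE)" /var/log/Xorg.0.log           # X server errors from the last start

    One caveat worth checking: if 'nomodeset' is still on the kernel command line, it disables kernel mode setting, which the open source radeon driver on 12.04 requires, so that option has to go before the radeon driver can display anything.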


  • Why does my root filesystem keep becoming read-only?

    - by Scott Severance
    I've lately been having an issue with my root filesystem becoming read-only. It happens some amount of time after boot. I don't know exactly when, as I don't usually notice it until something such as suspending the computer or printing fails. It seems to be fairly random. Since most of my system is on that partition, I can't re-mount it without rebooting. After this happens, the system runs a fsck. Sometimes it prompts to fix problems; other times it apparently finds none. To troubleshoot, I've searched through the logs but found nothing relevant. This might be due in part to not knowing when the actual errors took place. The filesystem is apparently good to begin with, as when fsck runs its fixes it doesn't report any errors. I've scanned the disk with SpinRite. A while ago, SpinRite found and recovered from some bad sectors on the hard drive. I ran a level 4 scan (a thorough scan) after this problem appeared, but SpinRite found nothing. The SMART data reports that the disk is OK, with 63 bad sectors. The number of bad sectors hasn't changed recently. I realize that the disk isn't in the best of condition, and I have complete backups in case of catastrophic failure. Yet the lack of errors in the logs, combined with SpinRite's test results and the unchanged SMART data, makes me think this problem has some cause other than disk failure. Other than disk failure, what could cause my symptoms?
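
    One detail worth knowing: ext3/ext4 filesystems remount themselves read-only the moment the kernel detects an inconsistency, according to the 'errors=' mount option, so the kernel log from the moment of the switch usually names the trigger even when syslog on the now read-only disk cannot record it. A hedged pair of checks (the device name is a placeholder):

        # what the filesystem is configured to do on errors
        sudo tune2fs -l /dev/sda1 | grep -i error
        # kernel messages around the remount, captured before rebooting
        dmesg | grep -iE "remount|ext4-fs error|i/o error"

    Catching the dmesg output right after the filesystem flips, before the reboot and fsck, matters, since the repair pass can clear the evidence.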


  • Is there a free tool/package that can monitor web traffic and display URLS accessed? [closed]

    - by Anthony
    I couldn't find a similar question, but then maybe I am searching for the wrong terms. A few years ago I used a router-like device, I'm pretty sure it was a SonicWall, that did this on a client's site. Basically all traffic would be routed through this device, and it allowed the manager/administrator to inspect the web usage of the workers, determine how often certain resources were accessed, and block them if necessary (much like a content filter). It showed reports based on the domain names reached, Facebook.com, Bebo.com and so on. It also displayed the usual IP traffic information; it was a UTM as well. I have tried Endian Firewall, with its NTOP install, but I don't think that will show URLs browsed. Maybe I just haven't found it in NTOP yet? I need this to troubleshoot connection and traffic issues at my home, with about twenty devices/users, so I didn't want to buy a dedicated solution, and I have spare hardware to use for a community product.


  • Cannot get nscd to run. DNS cache stale as a result

    - by Phunt
    I'm trying to troubleshoot an issue on a MediaTemple server (running CentOS 5) where the DNS cache has grown stale, I think because nscd has crashed. I've tried restarting nscd:

        # service nscd restart
        Stopping nscd:    [FAILED]
        Starting nscd:    [  OK  ]

    This makes sense, since I believe nscd has crashed, so it shouldn't already be running. But when I view the status of nscd:

        # service nscd status
        nscd dead but subsys locked

    And ps -A returns no processes related to nscd (I assume because it's dead). I've edited /etc/nscd.conf and uncommented the line that defines the location of the log file. It created the file but never writes anything to it. I tried looking at the init script, but found it's no help, since the script thinks everything is running fine: the service reports that it started up correctly. How do I 'unlock' the subsys that nscd is complaining about?
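
    On CentOS 5, "dead but subsys locked" usually means the daemon exited without the init script's cleanup running, leaving a stale lock file behind. A hedged cleanup sequence:

        # remove the stale lock (and pid file, if present), then start fresh
        rm -f /var/lock/subsys/nscd
        rm -f /var/run/nscd/nscd.pid
        service nscd start

    If nscd dies again right away, running it in the foreground with debug output (nscd -d) may show why it keeps crashing.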


  • Unable to connect to remote MS SQL Server 2008 Express SP3 instance by name

    - by Max
    I am trying to connect to a remote MS SQL Server 2008 SP3 x86 instance using its name. At first glance all seems to work well (e.g. it is possible to connect to the server locally, and to successfully telnet its port remotely), but there is one thing I can't understand... This line connects to the default instance of the remote SQL Server: osql -S ServerIP -d MyDatabase /U sa -P MyPassword and it does the trick. However, the next one: osql -S ServerIP\MyInstance -d MyDatabase /U sa -P MyPassword ends up with the following error: [SQL Native Client]SQL Network Interfaces: Error Locating Server/Instance Specified [xFFFFFFFF]. [SQL Native Client]Login timeout expired [SQL Native Client]An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. The only instance running on the server is MyInstance, which is (I guess) the default one. Could you please shed some light on this issue?
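
    Connecting by instance name takes a different path from connecting by IP: the client first asks the SQL Server Browser service (UDP 1434) to translate ServerIP\MyInstance into a TCP port, so the name-based form can fail even when the port itself is reachable by telnet. A hedged check on the server:

        rem is the SQL Server Browser service running?
        sc query sqlbrowser
        sc start sqlbrowser

    And from the client, connecting straight to the instance's TCP port sidesteps the Browser entirely (1433 below is a placeholder for whatever port the instance actually listens on):

        osql -S ServerIP,1433 -d MyDatabase /U sa -P MyPassword

    Also worth noting: a true default instance is reached as plain ServerIP; only named instances need the ServerIP\Name form and the Browser lookup that goes with it.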


  • SSL client auth in nginx with multiple server section

    - by Bastien974
    I want to implement ssl_verify_client in nginx. This works perfectly when I only have one server section listening on 443. In my case I have multiple, all listening on 443 but with different server_name values. For one particular server (proxy.mydomain.com) I'm adding the SSL client verification, but when I test the connectivity with openssl s_client -connect proxy.mydomain.com:443 -cert xxx.crt -key xxx.key and then issue GET / HTTP/1.1 with host: proxy.mydomain.com, it fails with 400 No required SSL certificate was sent. I think nginx is not receiving the proper server_name and is directing the request to the first server listening on 443. So I tried listening on another port, and it worked right away. What's the issue and how can I fix it?
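
    The suspicion about server_name is a good one: with several HTTPS server blocks on one port, nginx has to pick the block during the TLS handshake using SNI, and openssl s_client does not send an SNI name unless told to, so the request lands on the default (first) server for port 443. A hedged re-test:

        openssl s_client -connect proxy.mydomain.com:443 \
            -servername proxy.mydomain.com \
            -cert xxx.crt -key xxx.key

    If that works, ordinary browsers (which send SNI by default) should behave as well; only very old non-SNI clients would still fall through to the default server block.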


  • Can ping localhost but can't browse

    - by Anna
    I know this is a pretty common question, but I did my research and couldn't find a solution for this issue. I'm configuring a development application server, and I've come to the point where I can ping both localhost and 127.0.0.1, but I cannot browse either of them from IE or Firefox. I can browse and ping other websites (such as Google) just fine. I tried flushing the DNS (ipconfig /flushdns), restarting the IIS Admin service, restarting IIS itself, etc., and nothing seems to work. The results from ipconfig /all show IP Routing Enabled = No and WINS Proxy Enabled = No. What is intriguing to me is that I compared everything in IIS in the dev environment with the production environment and the settings are the same, but I can browse localhost in production and not in dev! What could be causing the inability to browse localhost and 127.0.0.1 from IE and Firefox?
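
    Since ping works, the loopback interface and name resolution are fine; the gap is between the browser and port 80. Two hedged checks from a command prompt:

        rem is anything actually listening on port 80?
        netstat -ano | findstr ":80"
        rem is a system proxy configured that might swallow localhost requests?
        netsh winhttp show proxy

    If nothing is listening, the site or its bindings in IIS are the problem rather than the network; if a proxy is configured, adding localhost to the browser's proxy exceptions is the usual fix.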


  • Windows Firewall rule based on domain name instead of IP

    - by DennyDotNet
    I'm trying to allow a service through Windows Firewall for a set of machines. I'd like to add my home machine to the rule, but my home machine has a dynamic IP address. I use DynDNS, so I have a hostname which I can always connect to, and I'm trying to see if there is a way I can use my hostname instead of an IP. Thanks. Update: let me add a little more information; perhaps there are other ways to resolve my issue. The server is a web server hosted by Rackspace. I only want to allow RDP access from my work (static IP, so no problem) and from home (dynamic). My home IP doesn't change too often, just often enough to annoy me. So maybe there is a better way to do this... maybe VPN?
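
    Windows Firewall rules only take IP addresses, so the usual workaround is a small scheduled script that resolves the DynDNS hostname and rewrites the rule's remote address. A hedged batch sketch, assuming Server 2008 or later; the hostname and rule name are placeholders, and the rule must already exist:

        @echo off
        rem resolve the dynamic hostname to its current IPv4 address
        for /f "tokens=2 delims=[]" %%A in ('ping -4 -n 1 myhome.dyndns.org ^| findstr "["') do set HOMEIP=%%A
        rem point the existing firewall rule at that address
        netsh advfirewall firewall set rule name="RDP from home" new remoteip=%HOMEIP%

    Run it from Task Scheduler every few minutes. That said, the VPN idea is arguably the cleaner design: leave RDP closed to the internet entirely and allow it only over the tunnel.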


  • MacBook repeatedly disconnects from Wi-Fi

    - by redwall_hp
    I have an early 2008-model MacBook (2.4 GHz). The Wi-Fi router I have at home is a Linksys WRT54GX2 that I have had for a few years. My MacBook has recently started disconnecting from the router every few minutes, which is rather annoying. I can reconnect again without having to restart the router or anything, as it seems that the MacBook is just dropping the connection. I have tried changing the channel on the router, and upgrading the laptop from Leopard to Snow Leopard made no difference either. I'm only about six feet from the Linksys device, so distance isn't an issue. This only happens with the Linksys router, while I can use the local library's open network without any issues. The problem also seemingly becomes more pronounced after midnight. What could the problem be? Edit: Here are the logs that Spiff requested: http://pastie.org/951761


  • Screencast not producing files

    - by JohnS
    I'm using Gnome 3 on 12.04 and trying to create a screencast. I start the screencast using the Ctrl-Alt-Shift-R shortcut and the red light appears in the bottom right corner. I go about my business then press the key combination again when done. The problem is that the screencast file gets generated maybe 1 out of 10 times. Is there a log file I can look at to determine the issue? How about a settings file? UPDATE: I did some additional testing. What's happening is that the screencast does work but it appends the new video to the existing file. Even if the file is renamed or moved to trash. Emptying trash does not create a new file either. Not sure where the video gets recorded to then. The only reliable way I've found to have a new file created is to log out of the session and log back in. Is this expected behaviour? Is there a way to force screencast to create a new file every time Ctrl-Alt-Shift-R is pressed?
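
    If it helps to poke at the recorder's configuration, the GNOME Shell built-in recorder exposes a few settings through gsettings; the schema name below is an assumption about this GNOME version, so treat it as a sketch:

        # list everything the shell recorder schema exposes (framerate, pipeline, file extension)
        gsettings list-recursively org.gnome.shell.recorder

    Recordings normally land in the home directory (or ~/Videos, depending on version) with a date-and-counter suffix; the counter is what should prevent appending to an existing file, so appends that persist even after the old file is deleted look like a counter bug worth reporting.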


  • Plesk 9 - unable to modify atmail vhost template

    - by Ben
    I've run into a small issue recently that causes my server's atmail to fail to authenticate users. I gathered from a web search that it's because I recently enabled APC on the server. I've found some references saying I need to modify the atmail vhost template, but those references are for Plesk 10 (I'm on 9), and the atmail config isn't in the same spot. I've found an unrelated topic that explains how to modify the vhost settings for atmail on Plesk 9, which I have done (adding php_admin_flag apc.enabled off to it). I then recompiled the server config using /usr/local/psa/admin/bin/websrvmng -a but it doesn't seem to pick up the changes. If I look at /etc/httpd/conf.d/zzz_atmail_vhost.conf after recompiling, it still doesn't show the APC settings. Summary of steps taken: modified /etc/psa-webmail/atmail/atmail_vhost.conf and added php_admin_flag register_globals off to the config; recompiled with /usr/local/psa/admin/bin/websrvmng -a; checked /etc/httpd/conf.d/zzz_atmail_vhost.conf. But no changes. What am I missing?
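
    A hedged way to see whether the directive is being dropped during regeneration or never read from the template at all: compare both files directly and confirm which atmail vhost file the running Apache actually includes:

        # does the directive exist in the template, and did it survive regeneration?
        grep -n "php_admin_flag" /etc/psa-webmail/atmail/atmail_vhost.conf /etc/httpd/conf.d/zzz_atmail_vhost.conf
        # which atmail-related files does the Apache config pull in?
        grep -rn "atmail" /etc/httpd/conf.d/
        service httpd restart

    If the template contains the line but the generated file never does, the file being edited may not be the template websrvmng reads on this Plesk version, which would be the point to confirm against the Plesk 9 docs or Parallels support.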


  • unable to send mail from postfix on Ubuntu 12.04

    - by gilmad
    I'm trying to send an email through Google from my localhost (via PHP 5.3), but Google keeps blocking my requests. I tried to follow the solutions given in a few similar questions, but for some reason they do not work. I followed these instructions to configure it: http://www.dnsexit.com/support/mailrelay/postfix.html Now for the config data. My main.cf file looks like this:

        relayhost = [smtp.gmail.com]:587
        smtp_fallback_relay = [relay.google.com]
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options =

    My sasl_passwd looks like this:

        [smtp.gmail.com]:587 [email protected]:password

    And this is what the mail.log rows look like:

        Dec 14 10:24:50 COMP-NAME postfix/pickup[5185]: 1C3987E0EDD: uid=33 from=
        Dec 14 10:24:50 COMP-NAME postfix/cleanup[5499]: 1C3987E0EDD: message-id=<[email protected]
        Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: 1C3987E0EDD: from=, size=483, nrcpt=1 (queue active)
        Dec 14 10:24:50 COMP-NAME postfix/smtp[5501]: 1C3987E0EDD: to=, relay=smtp.gmail.com[173.194.70.109]:587, delay=0.61, delays=0.19/0/0.32/0.1, dsn=5.7.0, status=bounced (host smtp.gmail.com[173.194.70.109] said: 530 5.7.0 Must issue a STARTTLS command first. w3sm8024250eel.17 (in reply to MAIL FROM command))
        Dec 14 10:24:50 COMP-NAME postfix/cleanup[5499]: C20677E0EDE: message-id=<[email protected]
        Dec 14 10:24:50 COMP-NAME postfix/bounce[5502]: 1C3987E0EDD: sender non-delivery notification: C20677E0EDE
        Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: C20677E0EDE: from=<, size=2532, nrcpt=1 (queue active)
        Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: 1C3987E0EDD: removed
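
    The bounce text "530 5.7.0 Must issue a STARTTLS command first" says Gmail is rejecting the session because Postfix never negotiated TLS before trying to send. A hedged main.cf addition addressing exactly that (the parameter names are standard Postfix; the CA file path is Ubuntu's default bundle):

        # require TLS on outgoing smtp connections (Postfix 2.3+)
        smtp_tls_security_level = encrypt
        smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
        # Gmail refuses anonymous SASL mechanisms
        smtp_sasl_security_options = noanonymous

    followed by service postfix restart. The empty smtp_sasl_security_options line in the current config is also worth fixing in its own right, since it wipes out the noanonymous default.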


  • Problem opening password encrypted .docx file on Word 2003

    - by molecule
    Hi all, I am having a problem opening a .docx file in my Word 2003. I have installed the Compatibility Pack for 2007, but when I try to open this particular file, I receive the error "Word experienced an error trying to open the file. Try these suggestions. 1. Check the file permissions for the document, 2. Make sure there is sufficient free memory and disk space, 3. Open the file with the Text Recovery converter." I do not think any of those causes apply, as I am able to open the file on a different PC with Word 2003 as well. I also have no issues opening non-password-encrypted .docx files. Has anyone experienced the same issue? Most posts on the internet suggest "open and repair", but as mentioned, I am able to open this file on another PC without any problems. Any advice is greatly appreciated. Thanks, George


  • High memory utilization by sqlservr.exe process

    - by abdul samad
    When I look in Task Manager (Processes tab) or at the perfmon memory counters (SQL Server: Memory Manager, Target Server Memory and Total Server Memory), I see high memory utilization by the sqlservr.exe process: nearly 8 GB (Target Server Memory counter) and 7.95 GB (Total Server Memory). When I restart the MSSQLSERVER service, it shoots right back up to the same size. I am seeing this quite frequently. Please help me identify why SQL Server is using so much memory, and how to find out what query, stored procedure, etc. is making it do so. (I am not using any triggers or cursors in my code.)
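
    Some context that may reframe the question: by design SQL Server grows its buffer pool toward the 'max server memory' setting (effectively unlimited by default) and holds on to it, so a large sqlservr.exe working set is normal behavior rather than a leak caused by one query. If the box needs headroom for other processes, the standard knob is capping the instance, e.g. at 4 GB (a placeholder value to adjust):

        sqlcmd -S . -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 4096; RECONFIGURE;"

    To see what the memory is actually holding, sys.dm_os_buffer_descriptors breaks the buffer pool down by database, and sys.dm_exec_query_stats ranks cached queries by cost.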


  • Solving "XmlSchemaException: The global element '<elementName>' has already been declared"

    - by ChrisD
    I recently encountered this error when I attempted to consume a new hosted WCF service.  The service used the Request/Response model and had been properly decorated.  The response and request objects were marked as DataContracts and had a specified namespace.   My WCF service interface was marked as a ServiceContract and shared the namespace attribute value.   Everything should have been fine, right? [ServiceContract(Namespace = "http://schemas.myclient.com/09/12")] public interface IProductActivationService { [OperationContract] ActivateSoftwareResponse ActivateSoftware(ActivateSoftwareRequest request); } well, not exactly.  Apparently the WSDL generator was having an issue: System.Xml.Schema.XmlSchemaException: The global element 'http://schemas.myclient.com/09/12:ActivateSoftwareResponse' has already been declared. After digging I’ve found the problem; the WSDL generator has some reserved suffixes for its entities, including Response, Request, Solicit (see http://msdn.microsoft.com/en-us/library/ms731045.aspx).  The error message is actually the result of a naming conflict.  The WSDL generator uses the namespace of the service to build its reserved types.  The service contract and data contract share a namespace, which coupled with the response/request name suffixes I was using in my class names, resulted in the SchemaException. The Fix: Two options: Rename my data contract entities to use a non-reserved keyword suffix (i.e.  change ActivateSoftwareResponse to ActivateSoftwareResp). or; Change the namespace of the data contracts to differ from the service contract namespace. I chose option 2 and changed all my data contracts to use a “http://schemas.myclient.com/09/12/data” namespace value. This avoided a name collision and I was able to produce my WSDL and consume my service.
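
    For concreteness, a minimal sketch of option 2: the service contract keeps its namespace while the data contracts move to the /data namespace, so the wrapper elements the WSDL generator creates no longer collide with the contract type names (the members shown are hypothetical):

        using System.Runtime.Serialization;

        [DataContract(Namespace = "http://schemas.myclient.com/09/12/data")]
        public class ActivateSoftwareRequest
        {
            [DataMember]
            public string LicenseKey { get; set; }  // hypothetical member
        }

        [DataContract(Namespace = "http://schemas.myclient.com/09/12/data")]
        public class ActivateSoftwareResponse
        {
            [DataMember]
            public bool Activated { get; set; }     // hypothetical member
        }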


  • Thoughts on exception handling.

    - by AndyScott
    Was working on a Windows Forms app (something I haven't done in a while), adding threading and logging so that it would work a little more smoothly and leave a record of who did what. I was just about at the point where I was going to check it into source control when I noticed that the Output window was showing "A first chance exception of type 'System.InvalidCastException' occurred in mscorlib.dll", so I googled it. In reading some threads about the error, I came across the following comment and it got me thinking: "In addition, while they should be avoided if possible, exceptions are a quite legitimate part of program execution. It's their going unhandled that is a real issue, because that means crashy, crashy." How do you normally use exception handling? I feel that exceptions are intended to handle errors in code (in my experience generally related to bad data making its way into the system). Now don't get me wrong, I understand that exceptions happen and should be dealt with, but I feel that they are a "last resort" to keep a program from crashing, and should never be a way to pass data or continue logical processing that could be handled in standard code flow. I mention this because I have seen it done. What do you think?
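
    A small C# illustration of the distinction being argued here, with parsing standing in for any expected bad-data case (ageText is a stand-in for user input); the two fragments are alternatives, not a sequence:

        // Exception as control flow: the expected failure is handled out of band
        int age;
        try { age = int.Parse(ageText); }
        catch (FormatException) { age = 0; }

        // The same expected case handled in the normal flow, reserving
        // exceptions for conditions the code genuinely cannot anticipate
        if (!int.TryParse(ageText, out age))
            age = 0;

    Both behave the same, but the second makes the "bad input is normal" assumption explicit and avoids the cost and debugger noise of a thrown exception on every malformed value.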


  • IRQ Conflicts Causing Video Card and Boot Problems?

    - by sanpatricio
    tl;dr - I have 4 devices sharing 1 IRQ. Is this bad, and how do I tell the BIOS to stop it? Background: I have an old Dell GX280 dual Pentium 4 that I (semi) resurrected last weekend with an installation of Ubuntu 12.04. Everything was going fine for the first several hours, until a problem that plagued me when WinXP was on that machine happened: it froze. Completely froze. None of the myriad ways I have found here on Ask Ubuntu helped me regain control, except a long-press of the power button to shut it off. Clearly, this wasn't a software/WinXP issue. After much googling, I found that hardware conflicts can often cause this sort of total lock-up, and with all the odd blocks of yellow and flecks of color showing on my screen (under both WinXP and Ubuntu) I figured my old GeForce 7600 was failing and causing these odd issues. (A good canned-air dusting of the entire interior fixed the color-fleck problem.) Again through much googling and numerous answers found on Ask Ubuntu, I somehow stumbled onto the lshw command. After going through its output, line by line, I found that I have four devices sharing IRQ 16: eth0, wlan0, ide0 (DVD-RW), and my video card. In hindsight, I can recall weird instances of my Ethernet connection to another computer not working when I thought it should. I never fully troubleshot those issues, so it could be a coincidence. The other thing that has been plaguing me since installing Ubuntu (it wasn't there with WinXP) is periodic moments of my monitor getting no signal from Ubuntu during boot. The first couple of days, it would disappear after the Dell boot screen and reappear at Ubuntu login. Now it disappears after the Dell boot screen and doesn't return at all; I have to hit F12, where I can load a safe-mode version of Ubuntu and get more details like dmesg and lsdev. I also ran memtest86 overnight and woke up to zero errors, so failing RAM is out. Where do I go from here?
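
    To the headline question first: PCI devices sharing one IRQ line is normal and the kernel handles it routinely, so sharing alone rarely explains hard freezes. A couple of hedged checks to see whether interrupts are actually implicated:

        # per-CPU interrupt counts; the device names at the end of each line show who shares it
        cat /proc/interrupts
        # was the IO-APIC set up, and were there IRQ-related complaints at boot?
        dmesg | grep -iE "apic|irq"

    A runaway count on IRQ 16, or a dmesg line like "irq 16: nobody cared", would point at a real interrupt problem; otherwise the freezes more plausibly trace back to the aging video card or power delivery the symptoms already suggest.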


  • How to Assure an Effective Data Model

    As a general rule, in my opinion, the effectiveness of a data model is directly related to the accuracy and completeness of a project's requirements. For example, there is no need to work on very detailed data models when the details surrounding them have not been defined or even clarified. Developing data models while the clarity of project requirements is limited tends to introduce design issues, because the details needed to create an effective data model are not yet known. One way to avoid this is to create data models that correspond to the complexity of the existing project requirements, so that when requirements are updated, new data models can be created based on any new discoveries about the requirements at a finer-grained level. This allows data models to start out as general entities while a project's requirements are still vague, with the entities refined as new and more substantial requirements are defined or redefined. It also promotes communication among all stakeholders in a project as they go through the process of defining and finalizing requirements. In addition, here are some general tips that can be applied to projects with regard to data modeling:

        - Initially model all data generally, and slowly refactor the data model as new requirements and business constraints are applied to the project.
        - Ensure that data modelers have the proper tools and training they need to design a data model accurately.
        - Create a common location for all project documents so that everyone can review a project's data models along with the other project documentation.
        - Ensure all data models follow a clear naming schema that tells readers the intended purpose of the data and how it will be applied within the project.


  • Login sometimes failing immediately after restoring a database

    - by Ian Ringrose
    We have a set of automated tests that restore a database and then run some .NET code against it. Sometimes, right after the database is restored, the login from ADO.NET fails. If I re-run the test, the restore and login work OK. The restored database looks fine when viewed with Management Studio. This is only a problem on some machines. We are using SQL Server 2008. Is there a known issue with a database restore "returning" a short time before the restored database is actually up and running?
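
    Whatever the root cause, a workaround that fits automated tests is to poll the database's state after the restore and hand control to the test only once it reports ONLINE. A hedged sketch (server and database names are placeholders):

        sqlcmd -S myserver -Q "SELECT state_desc FROM sys.databases WHERE name = 'MyTestDb'"

    Loop on that from the test harness, with a short sleep between attempts, until it prints ONLINE; a freshly restored database can pass through RESTORING/RECOVERING states during which logins targeting it fail.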


  • TeamCity backup from REST API doesn't back up

    - by ChrisFletcher
    I'm using the REST API in TeamCity 6.5 to request that a backup be created, as follows: http://teamcity:8111/httpAuth/app/rest/server/backup?includeConfigs=true&includeDatabase=true&includeBuildLogs=true&fileName=filename as specified in the documentation here: http://confluence.jetbrains.net/display/TW/REST+API+Plugin#RESTAPIPlugin-DataBackup The problem is that it simply returns "Idle", as does the backup status call, and no backup appears. I can back up fine from the web interface, and I can also return a list of users from the REST API without issue. I'm starting to suspect there is some sort of permission or config option, but I can't find one.
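
    One hedged thing to verify is how the request is actually issued: starting a backup is a write operation, and if the URL is sent as a plain GET (which is what pasting it into a browser does), the server may simply answer with the current status, i.e. "Idle". Trying it explicitly as a POST would rule that out:

        curl -u user:password -X POST "http://teamcity:8111/httpAuth/app/rest/server/backup?includeConfigs=true&includeDatabase=true&includeBuildLogs=true&fileName=filename"

    If the POST returns the backup's filename, or the status call flips to running, the earlier requests were being treated as status reads rather than commands.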

