Search Results

Search found 23890 results on 956 pages for 'issue'.


  • CSS gradient not rendering in Windows Phone 8 WebBrowser Control

    - by SRSHawk
    I am facing an issue where a CSS3 gradient background is not rendered in the WebBrowser control on Windows Phone 8, even though the same HTML renders the gradient when opened in the browser on Windows Phone 8. The HTML I am using is:

        <html>
        <head>
        <meta name="viewport" content="width=320, user-scalable=no, minimum-scale=1, maximum-scale=1"/>
        </head>
        <body style="margin:0px;overflow:hidden;">
        <div id="im_c" style="height:48px;width:100%; background: -ms-linear-gradient( bottom, #432100 30%, #00AAAA 70%);">
        <div style="margin:0 auto;width:320px;">
        Test
        </div>
        </div>
        <style> body {margin:0px} </style>
        </body>

    In Windows Phone 8, I load the HTML as below:

        WebBrowser WebView = new WebBrowser();
        WebView.Height = 100;
        WebView.Width = 400;
        WebView.NavigateToString(@"<html><head><meta name=""viewport"" content=""width=320, user-scalable=no, minimum-scale=1, maximum-scale=1""/></head><body style=""margin:0px;overflow:hidden;""> <div id=""im_c"" style=""height:48px;width:100%; background: -ms-linear-gradient( bottom, #432100 30%, #00AAAA 70%);""> <div style=""margin:0 auto;width:320px;"">Test</div></div> <style> body {margin:0px} </style> </body></html>");

    In this case, the CSS gradient is not visible. Am I missing something?
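
    One thing worth checking (an assumption, not something confirmed in the post): content loaded with NavigateToString that has no doctype may be rendered in a legacy document mode in which -ms-linear-gradient is ignored. A minimal sketch that forces a modern document mode and adds the unprefixed syntax as a fallback:

        <!DOCTYPE html>
        <html>
        <head>
        <meta http-equiv="X-UA-Compatible" content="IE=10" />
        <meta name="viewport" content="width=320, user-scalable=no"/>
        </head>
        <body style="margin:0px;overflow:hidden;">
        <!-- declare both the -ms- prefixed and the standard gradient syntax -->
        <div style="height:48px;width:100%;
                    background: -ms-linear-gradient(bottom, #432100 30%, #00AAAA 70%);
                    background: linear-gradient(to top, #432100 30%, #00AAAA 70%);">
          Test
        </div>
        </body>
        </html>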

  • Instant connection to wireless network but delayed internet access on Mediacom with Windows 7

    - by David
    I have Mediacom cable internet and their provided modem/wireless router, a Cisco DPC3825. Each of the laptops experiencing the trouble has Windows 7 64-bit. When connecting to the wireless network, each computer takes a second or two to connect and then toggles from "no internet access" to "internet access"; however, no websites are accessible for about five minutes after connecting. After that, there aren't any problems. It happens on all 3 of the laptops I have available, and none of them have problems on any other network. My phone doesn't seem to have the delay when it connects. I've power-cycled the modem/router and flushed the DNS cache. Some of the DNS entries are manually set to the Google DNS addresses and one is left at the default. I've contacted Mediacom support and had them try all their tricks: they changed the SSID and password and reset the unit remotely a handful of times. It was installed just this month and passed the tech's checks upon installation. Nothing in the settings has been changed, but it has exhibited this problem from the get-go. This person seems to be having the same problem, but no solution was posted: http://www.dslreports.com/forum/r27372861-IA-Connection-to-Mediacom-wireless-Modem-no-internet- Help greatly appreciated.
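
    A quick way to test whether the delay is DNS-related (the adapter name below is an assumption; adjust it to match the laptop): pin both Google resolvers on one affected machine, flush the cache, and time a lookup right after associating.

        ipconfig /flushdns
        netsh interface ip set dns name="Wireless Network Connection" static 8.8.8.8
        netsh interface ip add dns name="Wireless Network Connection" 8.8.4.4 index=2
        nslookup example.com

    If lookups answer instantly while pages still stall for five minutes, the problem more likely sits in the DPC3825's client/ARP handling than in DNS.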

  • How can I debug user mode driver failures in Windows 8

    - by Tom
    I have a 32 GB SD card. Whenever I insert this card into my newly upgraded Windows 8 laptop, the OS stops responding normally. Metro apps won't work. The system may or may not log in. Desktop apps may or may not be able to do things. When I remove the card and restart, all is fine. As soon as I put the card back in, the system starts misbehaving again. I've run Windows Update, so I have the latest drivers from Microsoft. This does not occur with the 8 GB cards I have; unfortunately I only have one 32 GB card, so I can't test with others. From examining the system event log I've determined this is happening due to a user mode driver failure. How can I best debug this issue from here? How can I figure out which driver this is related to? Will there be a Dr. Watson crash dump somewhere? Details:

        - System
          - Provider
            [ Name]  Microsoft-Windows-DriverFrameworks-UserMode
            [ Guid]  {2E35AAEB-857F-4BEB-A418-2E6C0E54D988}
          EventID 10110
          Version 1
          Level 1
          Task 64
          Opcode 0
          Keywords 0x2000000000000000
          - TimeCreated
            [ SystemTime]  2012-10-29T00:51:57.532718300Z
          EventRecordID 40417
          Correlation
          - Execution
            [ ProcessID]  1056
            [ ThreadID]  3796
          Channel System
          Computer thebrain
          - Security
            [ UserID]  S-1-5-18
        - UserData
          - UMDFHostProblem
            [ lifetime]  {811E3DC4-FBC6-420B-ABCC-AD7505A36F3B}
          - Problem
            [ code]  3
            [ detectedBy]  2
          ExitCode 3
          - Operation
            [ code]  259
          Message 72448
          Status 4294967295

    Edit 1: I tried using Debug View from Sysinternals (you can get it here: http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx). The output it gave was not especially helpful. Then I tried connecting WinDbg to WUDFHost.exe (the process that seems to host user mode drivers) to see if it could catch the error. Get it here: http://msdn.microsoft.com/en-US/windows/hardware/hh852363 Instructions: http://msdn.microsoft.com/en-US/library/windows/hardware/ff554716(v=vs.85).aspx That didn't help much. It didn't catch any exceptions as I'd hoped (which would point me to the cause of the crash at least). Here's the stack of one of the threads:
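
    Two approaches that are generally useful for UMDF failures (a sketch; neither is specific to this card reader). First, Windows Error Reporting can be told to keep local crash dumps for the UMDF host process; second, WinDbg can break on first-chance access violations while the card is inserted:

        rem ask WER to keep full dumps for the UMDF host (elevated prompt)
        reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\WUDFHost.exe" /v DumpFolder /t REG_EXPAND_SZ /d C:\dumps
        reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\WUDFHost.exe" /v DumpType /t REG_DWORD /d 2

        rem in WinDbg, attached to WUDFHost.exe:
        rem   sxe av       (break when an access violation is first raised)
        rem   g            (resume, then insert the card)
        rem   !analyze -v  (summarize the faulting module once it breaks)

    The module named in the dump or in the !analyze output should identify which driver inside the host process is failing.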

  • Debian Squeeze Linux 9p virtfs guest mount failure

    - by Tero Kilkanen
    First some background information on the server. Host OS: Debian Linux Squeeze with qemu-kvm version 1.0+dfsg-8~bpo60+1. Guest OS: Debian Linux Squeeze. I use qemu-kvm via libvirt. I have set up 9p VirtFS with the following in the guest's XML config:

        <filesystem type='mount' accessmode='passthrough'>
          <source dir='/srv/www'/>
          <target dir='wwwdata'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </filesystem>

    That is, I want to share /srv/www to the guest OS using the mount tag wwwdata. When I try to mount the VirtFS share from the guest, I get an error message:

        root@server:~# mount -t 9p -o trans=virtio,version=9p2000.L2 wwwdata /srv/www/
        mount: wwwdata: can't read superblock

    I also tried the virtfs target dir/mount tag www at first and got the same error message. However, I was able to mount the VirtFS share using a mount tag like www1111 or www1. Some more notes on this one: dmesg doesn't show anything useful in either the guest or the host. The only sign is this entry in the guest dmesg:

        [   36.054936] Installing v9fs 9p2000 file system support

    Does anyone know how to get this working correctly? Google gives no useful information on this issue; I've tried several searches.
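
    One detail that stands out (an observation, not a confirmed diagnosis): version=9p2000.L2 is not a protocol version string the kernel 9p client recognizes; the known values are 9p2000, 9p2000.u and 9p2000.L, and an unrecognized version could plausibly surface as a superblock error. That said, the fact that longer tags mount fine suggests the tag itself may also matter. A minimal sketch of the mount with the standard version string:

        # hedged: version corrected to 9p2000.L; msize is optional tuning
        mount -t 9p -o trans=virtio,version=9p2000.L,msize=262144 wwwdata /srv/www/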

  • Setting up a dual boot by installing cloned partitions using Clonezilla

    - by Nimjox
    I'm trying to set up a dual boot system with Windows 7 and Linux Mint. Here's the kicker: both are partition images I've saved using Clonezilla from different places and, to make matters worse, the Linux Mint image is formatted as an LVM. I need both of these images specifically, as the Windows one is a corporate image that I must use and the other is a development image that took me a week to set up. I've gotten it almost all working, but my issue is that I can't get Clonezilla to restore Mint without messing up the Windows partition table, or vice versa. I can use the -k1 option, which doesn't copy the partition table, but then the cloned partition is unusable and I'm not sure how to fix the partition table. Here's what I'm doing:

    1. Use GParted to make partitions: sda1 40 GB ntfs (Windows), sda2 extended 70 GB, sda5 lvm2 pv 69.99 GB (Linux), sda3 500 MB (GRUB).
    2. Clonezilla the Windows image into the sda1 partition (keeping the partition table).
    3. Clonezilla the Linux image into the sda5 partition (not recreating the partition table).

    After all that I can boot into Windows using the default MBR. I can use a rescue/repair CD to reinstall GRUB, which will see Windows 7, but I can't get it to see the Linux OS. I'm thinking it's because of the sda5 partition, but I'm not sure. Any ideas on what I could do to get this working, or where I might be going wrong? If there is any additional detail you need please let me know and I'll edit, as this is a lot.
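
    Since GRUB sees Windows but not Mint, a plausible next step is to activate the LVM from the rescue CD, chroot into Mint, and reinstall GRUB from inside it. This is a sketch under assumptions: the Mint image restored intact, and the volume group/logical volume names below are hypothetical; substitute the real ones reported by vgs/lvs.

        sudo vgscan                        # look for volume groups on sda5
        sudo vgchange -ay                  # activate them
        sudo mount /dev/mint-vg/root /mnt  # hypothetical VG/LV names
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt
        grub-install /dev/sda
        update-grub                        # os-prober should now detect both OSes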

  • fatal error 'stdio.h' Python 2.7 on Mac OS X 10.7.5 [closed]

    - by DjangoRocks
    I have this weird issue on my Mac OS X 10.7.5:

        /Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33:10: fatal error: 'stdio.h' file not found

    What caused the above error? It has been bugging me and I can't install mysql-python, as I'm stuck at this step. I'm using Python 2.7.3. Things like Google App Engine (Python), Python scripts and Tornado generally work on my Mac, but not mysql-python. I've installed MySQL using the dmg image and have copied the mysql folder to /usr/local/. How do I fix this?

    ====== UPDATE ======

    I ran the command and tried to install mysql-python by running sudo python setup.py install, but received the following:

        running install
        running bdist_egg
        running egg_info
        writing MySQL_python.egg-info/PKG-INFO
        writing top-level names to MySQL_python.egg-info/top_level.txt
        writing dependency_links to MySQL_python.egg-info/dependency_links.txt
        writing MySQL_python.egg-info/PKG-INFO
        writing top-level names to MySQL_python.egg-info/top_level.txt
        writing dependency_links to MySQL_python.egg-info/dependency_links.txt
        reading manifest file 'MySQL_python.egg-info/SOURCES.txt'
        reading manifest template 'MANIFEST.in'
        writing manifest file 'MySQL_python.egg-info/SOURCES.txt'
        installing library code to build/bdist.macosx-10.6-intel/egg
        running install_lib
        running build_py
        copying MySQLdb/release.py -> build/lib.macosx-10.6-intel-2.7/MySQLdb
        running build_ext
        gcc-4.2 not found, using clang instead
        building '_mysql' extension
        clang -fno-strict-aliasing -fno-common -dynamic -g -O2 -DNDEBUG -g -O3 -Dversion_info=(1,2,4,'rc',5) -D__version__=1.2.4c1 -I/usr/local/mysql/include -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _mysql.c -o build/temp.macosx-10.6-intel-2.7/_mysql.o -Os -g -fno-common -fno-strict-aliasing -arch x86_64
        In file included from _mysql.c:29:
        /Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33:10: fatal error: 'stdio.h' file not found
        #include <stdio.h>
                 ^
        1 error generated.
        error: command 'clang' failed with exit status 1

    What other possible ways can I fix it? Thanks! Best regards.
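
    stdio.h failing to resolve while compiling a C extension usually means the system headers are absent, which on 10.7 typically points at the Xcode Command Line Tools not being installed (an assumption; the post doesn't say which Xcode setup is present). A quick check and fix:

        # is the C library header set installed at all?
        ls /usr/include/stdio.h

        # if not: install "Command Line Tools" from
        #   Xcode > Preferences > Downloads > Components
        # (or the standalone package from developer.apple.com), then retry:
        sudo python setup.py install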

  • scanner no longer working windows 7

    - by Sackling
    I have a Windows 7 64-bit Pro machine that for some reason no longer works with a Brother all-in-one printer/scanner when I try to scan. This happened seemingly out of nowhere. Printing still works. When I try to scan from Photoshop, Photoshop crashes. When I try to scan from Device Manager (which takes a really long time to access the printer) I get an error saying: "Operation could not be completed. (Error 0x00000015). The device is not ready." I tried uninstalling/reinstalling the printer and drivers. I have also run the Brother driver cleaner and then reinstalled, and I get the same results every time. I had access to another HP all-in-one, and very strangely, when I set up this new printer a similar issue happens: I am able to print, but when I try to scan from the HP tool nothing pops up. So my thought is that it is something to do with the TWAIN driver? But I don't know what to do or check. Any help is appreciated.
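
    Since two different vendors' scanners misbehave the same way, it may be worth ruling out the shared Windows scanning plumbing rather than the drivers. A sketch, assuming the devices scan through WIA (whose service name is stisvc); run from an elevated prompt:

        net stop stisvc
        net start stisvc

        rem also confirm the service configuration and its dependencies:
        sc qc stisvc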

  • Using native resolution on external display results in stretched, out of bounds image

    - by Roni Yaniv
    I have an HP Mini 311 netbook with Windows XP, which I've connected to a Samsung SyncMaster 2043BW display via the supplied analog cable. The external display's native resolution is 1680x1050, which the netbook's ION GPU supports. I've configured the external display as the single display (no cloning or any such fancy stuff). However, once I set the native resolution, the image just stretches out: it looks squashed, and it goes outside the monitor's edges. In contrast, lower resolutions manage to stay within the monitor's edges, though obviously they are skewed in some way (vertically or horizontally). BTW, the only resolution which seems to be displayed relatively clearly (it's the least blurry) is 1280x720. I tried looking all over the web for an explanation/advice but could not find any. I have already played with the settings on the external display itself several times, so either it's not that, or I missed something. Has someone run into this issue? I need help.

  • locked files on HFS+ home partition shared between OSX/Linux

    - by HazyBlueDot
    I dual boot into Arch Linux and OS X 10.6 on my MacBook Pro. I synced my UID between both OSes and created an HFS+ partition (with no journaling) to use as a shared home/Users partition. For the most part it works just as I'd expect, but sometimes when I'm booted into OS X certain files are "locked" (when I Get Info on a particular file, the "Locked" box is checked under the "General" pane; I can resolve the issue by manually unchecking the box) and/or I get "Operation not permitted" when I try deleting or chmod'ing a file. In both cases I don't see anything out of the ordinary in the permission bits displayed with ls -l, except for a trailing '@' character in the position where the sticky bit would normally occur:

        -rw-r--r--@ 1 myuser mygroup 296 Mar 29 11:44 myfile

    This '@' character shows up on ALL normal files, so it doesn't seem to be linked to the locked/operation-not-permitted situation. On the Linux side of things I never have permission problems. To the best of my limited knowledge and experience with ACLs, I've not found any ACLs on any of the files in question. For what it's worth, I do most of my file editing using emacs (Aquamacs in OS X); is it possible it is setting weird permission bits? In short:

    - What is the "locked" setting that OS X uses, and does it have a permission bit equivalent (so at the very least I could recursively unlock all files in my home directory from the terminal)?
    - Why might some, but not other, files get "locked" when booting into OS X?
    - What is the meaning of the '@' character?
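
    For reference (standard OS X behavior, though whether it explains the Linux interaction here is an assumption): the trailing '@' in ls output marks extended attributes, and the Finder's "Locked" checkbox maps to the BSD uchg file flag, which chflags can clear in bulk:

        ls -l@ myfile        # list the extended attributes behind the '@'
        xattr -l myfile      # dump their contents
        ls -lO myfile        # show BSD flags; 'uchg' is the Finder lock
        chflags -R nouchg ~  # recursively unlock everything in $HOME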

  • Windows service runs file locally but not on server

    - by Ben
    I created a simple Windows service in .NET which runs a file. When I run the service locally, I see the file running in Task Manager just fine. However, when I run the service on the server, it won't run the file. I've checked the path to the file, which is fine. I also checked the permissions on the folder and file, and they're fine as well. Also, there are no exceptions happening. Below is the code used to launch the process which runs the file. I posted this first on Stack Overflow, and some people were thinking this is a config issue, so I moved it here. Any ideas?

        try
        {
            // TODO: Add code here to start your service.
            eventLog1.WriteEntry("VirtualCameraService started");

            // Create an instance of the Process class responsible for
            // starting the new process.
            System.Diagnostics.Process process1 = new System.Diagnostics.Process();

            // Set the directory where the file resides
            process1.StartInfo.WorkingDirectory = "C:\\VirtualCameraServiceSetup\\";

            // Set the filename of the file to be opened
            process1.StartInfo.FileName = "VirtualCameraServiceProject.avc";

            // Start the process
            process1.Start();
        }
        catch (Exception ex)
        {
            eventLog1.WriteEntry("VirtualCameraService exception - " + ex.InnerException);
        }
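
    One thing the snippet depends on: starting a document (an .avc file rather than an .exe) goes through shell file associations, and on Vista+ servers a service runs in the isolated session 0, where there is no interactive shell or desktop. A hedged sketch of launching the associated handler directly instead (the player path is hypothetical; whatever is registered for .avc on the server goes there):

        // a sketch, not the service's confirmed fix
        var psi = new System.Diagnostics.ProcessStartInfo
        {
            // hypothetical handler; find the real one under HKCR\.avc
            FileName = @"C:\Program Files\AvcPlayer\player.exe",
            Arguments = "\"C:\\VirtualCameraServiceSetup\\VirtualCameraServiceProject.avc\"",
            WorkingDirectory = @"C:\VirtualCameraServiceSetup\",
            UseShellExecute = false   // don't rely on the shell in session 0
        };
        System.Diagnostics.Process.Start(psi);

    Separately, ex.InnerException is often null for a top-level failure; logging ex.ToString() would capture more detail.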

  • Apache Proxy Pass and Web Sockets

    - by James
    I'm using Apache with the mod_proxy module to reverse proxy my Node.js application through to port 80, so that we can access it as an internal application. I have a file in sites-enabled which contains this:

        <VirtualHost *:80>
            DocumentRoot /var/www/internal/
            ServerName internal
            ServerAlias internal

            <Directory /var/www/internal/public/>
                Options All
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>

            ProxyRequests off

            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>

            ProxyPass / http://localhost:8080/ retry=0
            ProxyPassReverse / http://localhost:8080/
            ProxyPreserveHost on
            ProxyTimeout 1200

            LogLevel debug
            AllowEncodedSlashes on
        </VirtualHost>

    As I said, our application is written in Node.js and we're using socket.io to make use of WebSockets, as our application also contains realtime elements. The problem is that mod_proxy doesn't seem to handle WebSockets, and we get errors when trying to use them:

        WebSocket connection to 'ws://bloot/socket.io/1/websocket/nHtTh6ZwQjSXlmI7UMua' failed: Unexpected response code: 502

    How can we fix this issue and keep sockets working, as the only way we can currently get it working is to access the site via ip:port, which we don't want to do? Also, as a side question, how can I get ErrorDocument to work properly? Our error files are stored in /var/www/internal/public/error/, but they seem to get put through the proxy too.
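
    A sketch of the usual fix, assuming Apache 2.4.5 or newer (plain mod_proxy cannot upgrade a connection to the WebSocket protocol; mod_proxy_wstunnel adds that): route the socket.io WebSocket path through ws:// before the catch-all HTTP rule, exclude the error pages from proxying, and let Apache's own error documents override backend errors.

        # a2enmod proxy_wstunnel
        ProxyPass /error/ !
        ProxyPass /socket.io/1/websocket/ ws://localhost:8080/socket.io/1/websocket/
        ProxyPass / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        # side question: serve Apache's ErrorDocument pages instead of the
        # backend's error responses
        ProxyErrorOverride On
        ErrorDocument 502 /error/502.html

    Order matters: mod_proxy uses the first matching ProxyPass rule, so the exclusion and the ws:// rule must precede the catch-all.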

  • How can I have puppet deploy ssh keys for virtual users?

    - by Pheezy
    I am trying to get Puppet to assign authorized SSH keys for virtual users, but I keep getting the following error:

        err: Could not retrieve catalog: Could not parse for environment production: Syntax error at 'user'; expected '}' at /etc/puppet/modules/users/manifests/ssh_authorized_keys.pp:9

    I believe my configuration is correct (listed below), but is there a syntax error or scoping issue I am missing? I would simply like to assign users to nodes and have those users automagically have their SSH keys installed. Is there maybe a better way to do this and I'm just overthinking it?

        # /etc/puppet/modules/users/virtual.pp
        class user::virtual {
          @user { "user":
            home       => "/home/user",
            ensure     => "present",
            groups     => ["root","wheel"],
            uid        => "8001",
            password   => "SCRAMBLED",
            comment    => "User",
            shell      => "/bin/bash",
            managehome => "true",
          }

        # /etc/puppet/modules/users/manifests/ssh_authorized_keys.pp
        ssh_authorized_key { "user":
          ensure => "present",
          type   => "ssh-dss",
          key    => "AAAAB....",
          user   => "user",
        }

        # /etc/puppet/modules/users/init.pp
        import "users.pp"
        import "ssh_authorized_keys.pp"

        class user::ops inherits user::virtual {
          realize(
            User["user"],
          )
        }

        # /etc/puppet/manifests/modules.pp
        import "sudo"
        import "users"

        # /etc/puppet/manifests/nodes.pp
        node basenode {
          include sudo
        }
        node 'testbox' inherits basenode {
          include user::ops
        }

        # /etc/puppet/manifests/site.pp
        import "modules"
        import "nodes"

        # The filebucket option allows for file backups to the server
        filebucket { main: server => 'puppet' }

        # Set global defaults - including backing up all files to the main
        # filebucket and adds a global path
        File { backup => main }
        Exec { path => "/usr/bin:/usr/sbin/:/bin:/sbin" }
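
    A hedged reading of the error: ssh_authorized_keys.pp declares a bare resource at top scope, and (in the paste at least) the class brace opened in virtual.pp is never closed, which matches "expected '}'". A sketch of one conventional arrangement, keeping the key virtual next to its user so both are realized together:

        # a sketch, assuming keys should follow their users
        class user::virtual {
          @user { "user":
            ensure     => present,
            home       => "/home/user",
            managehome => true,
          }
          @ssh_authorized_key { "user":
            ensure => present,
            type   => "ssh-dss",
            key    => "AAAAB....",
            user   => "user",
          }
        }   # <- this closing brace is what the parser is missing

        class user::ops inherits user::virtual {
          realize(User["user"], Ssh_authorized_key["user"])
        }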

  • How do you handle data archiving?

    - by 20th Century Boy
    Backups are one thing, but long-term archival is another. For example, you might be required to store emails for 7 years, or keep all project data indefinitely. I used to save archives to tape, but then I've had tapes get destroyed (drives rip the tape out). So... write to 2 tapes, I hear you say. Is that what others do? Have 2 (or more) tapes of the same data for redundancy? But then the other issue is that tapes cannot usually be read by different backup software vendors. E.g. if you go from Arcserve to Backup Exec to Commvault over 10 years, you would need to keep all 3 systems so that you could restore old data. Likewise for hardware: old tapes might not be barcoded, might not be compatible with the new library, etc. So do you keep old tape hardware AND old software just in case you might need to restore a 10-year-old file? Or... when you move to a new backup system, do you migrate all archived data to the new system and re-archive it onto new tapes? That could be a huge job. Any thoughts?

  • DIR $file "File Not Found" vs DIR $filedir shows it... not permissions, not USB

    - by Kev
    I was having this problem before on a USB drive, but now it's happening on my main RAID5-backed hard disk:

        2013-10-17 9:37 C:\>dir "C:\Shares\Shared\Reference\Safety Management System\Video CD\AutoPlay\Docs\Manuel*"
         Volume in drive C has no label.
         Volume Serial Number is 3C18-E114

         Directory of C:\Shares\Shared\Reference\Safety Management System\Video CD\AutoPlay\Docs

        2003-09-09  11:29 PM         1,056,768 Manuel d'intervention d'urgence MFC.doc
        2004-06-20  10:36 PM           139,849 Manuel d'intervention d'urgence MFC.pdf
                       2 File(s)      1,196,617 bytes
                       0 Dir(s)  196,068,691,968 bytes free

        2013-10-17 9:38 C:\>dir "C:\Shares\Shared\Reference\Safety Management System\Video CD\AutoPlay\Docs\Manuel d'intervention d'urgence MFC.doc"
         Volume in drive C has no label.
         Volume Serial Number is 3C18-E114

         Directory of C:\Shares\Shared\Reference\Safety Management System\Video CD\AutoPlay\Docs

        File Not Found

        2013-10-17 9:38 C:\>

    This is from a Command Prompt window where I went to Properties and changed who it runs as: I opened it, had it run as me with the "restricted access" box unchecked, then ran the above. The file in question has the following ACLs: Administrators, SYSTEM, and OurCompanyUsers. All three have full control of everything. Nobody has any Deny bits set. I am a member of Administrators, so I don't believe it's a permissions issue. It's not a USB drive, so this time there is no question of USB hardware. Windows Server 2003 Standard Edition SP2. What does this mean? Is this more likely a hardware or software problem?
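
    One hedged explanation that fits "wildcard finds it, exact name doesn't": the apostrophe stored in the real filename may not be the ASCII ' typed at the prompt, e.g. a typographic U+2019 that renders identically in the console. The 8.3 short name sidesteps the character entirely:

        dir /x "C:\Shares\Shared\Reference\Safety Management System\Video CD\AutoPlay\Docs"
        rem then address the file by the short name /x reports, e.g. (hypothetical):
        dir "C:\Shares\Shared\Reference\Safety Management System\Video CD\AutoPlay\Docs\MANUEL~1.DOC"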

  • Ubuntu 10.04 server delay responding to AJAX requests

    - by DanielAttard
    I manage an Ubuntu 10.04 server with a couple of domains hosted on it. As I continue to learn more about all these wonderful new (for me) things, one issue that I have begun to notice is the delay it sometimes takes for the server to respond to certain requests. As an example, when I view the timeline of events using Firebug I can see that most of the time when I make a POST, the server responds in under 100 ms. Sometimes, however, there is a substantial delay before the RESPONSE from the server. I can't seem to tell when the delay will happen and when it won't; however, when it happens the delay is always about 4.5 seconds. The delay seems to happen about 30-40% of the time. Here is the section of apache2.conf dealing with logs:

        #
        # The following directives define some format nicknames for use with
        # a CustomLog directive (see below).
        # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i
        #
        LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %>s %O" common
        LogFormat "%{Referer}i -> %U" referer
        LogFormat "%{User-agent}i" agent

    I have no idea where to look to try and debug this problem or investigate further. Any suggestions?
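
    A repeatable multi-second delay on a minority of requests often points at a timeout in some lookup path rather than at load (a hypothesis, not a diagnosis). Two cheap checks: make sure Apache is not doing reverse DNS per request, and watch for resolver traffic while reproducing a slow POST:

        # in apache2.conf (Off is the default, but worth confirming):
        HostnameLookups Off

        # meanwhile, on the server, watch for DNS traffic during a slow request:
        sudo tcpdump -n -i any port 53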

  • sendmail Name server timeout

    - by broody
    Complete sendmail newbie here... I've been trying to get mailing to work in PHP and I've root-caused it down to sendmail's complaint about a "Name server timeout":

        > sendmail -t -v
        From: [email protected]
        To: [email protected]
        .
        gmail.com: Name server timeout
        [email protected]... Transient parse error -- message queued for future delivery
        [email protected]... queued

    So it sounds like a DNS issue? But I can do a dig mx gmail.com and it will query successfully. Here's what confuses me... I can get sendmail to work two other ways. The first way is through telnet:

        > telnet 127.0.0.1 25
        Helo me
        Mail from: [email protected]
        Rcpt to: [email protected]
        .
        message sent

    And the second way is by explicitly passing the sendmail.cf, which is strange because it's the exact same file I use to configure sendmail to begin with:

        > sendmail -t -v -C/etc/mail/sendmail.cf

    But none of these solutions will resolve my PHP mailing... I am clueless as to what is going on... appreciate any help.
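
    The -C observation is a useful clue: invoked as sendmail -t without -C, the binary usually runs as the mail submission program and reads /etc/mail/submit.cf rather than sendmail.cf, so the two invocations genuinely follow different configs (hedged: exact behavior depends on the build). One sketch is to point the MSP at the local MTA, which the telnet test already proved works:

        # in /etc/mail/submit.mc, relay through the loopback MTA:
        FEATURE(`msp', `[127.0.0.1]')dnl

        # rebuild submit.cf afterwards (distribution-specific; e.g.
        # 'sendmailconfig' on Debian, or 'make -C /etc/mail')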

  • PsExec and Remote Environment Variables, Logging, Etc.

    - by alharaka
    When I run PsExec on a remote computer, I always fall short of what I want. What I would like, ideally, in most situations is (a) a log on an admin server where each individual log has the name of the remote computer it was generated from (e.g. COMPNAME1.log, COMPNAME2.log, etc.) or (b) a log file on each remote computer with whatever name I specify. When I try scenario (a), I use the following command:

        %SystemDrive%\path\to\psexec.exe @listofcomputers.txt -u DOMAIN\username cmd /c echo TEST >> \\server.company.tld\share\%computername%.log

    Problem is that it never works. All the computers just write to the log where %computername% is the computer I execute PsExec from in my office. What I want are unique logs for each computer specified in listofcomputers.txt that correctly use the hostname from the remote environment variable. Is that even possible? It does not seem to work for me. I tried this, and the syntax is clearly wrong:

        %SystemDrive%\path\to\psexec.exe @listofcomputers.txt -u DOMAIN\username "cmd /c echo TEST >> \\server.company.tld\share\%computername%.log"

    PsExec just fails saying the system file cannot be found (read: syntax fail). As for scenario (b), it appears to be a variation of the same problem. When I run a command like this, it does not work either:

        %SystemDrive%\path\to\psexec.exe @listofcomputers.txt -u DOMAIN\username "cmd /c echo %computername% >> \\server.company.tld\share\aggregated.log"

    Is there something I do not understand about remote paths and environment variables with PsExec on the cmd.exe console (I have not even tried the dreaded PowerShell yet)? I know such things work in a batch file (cmd /c \\server.company.tld\share\runthis.bat), but is there a reason it will not work when executing commands as arguments? I always need this, and can never get it!
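
    The behavior described is the local shell expanding %computername% before PsExec ever runs. A sketch of the standard escapes so the remote cmd.exe does the expansion instead (paths unchanged from the question):

        rem at an interactive prompt, break up the variable with carets:
        psexec @listofcomputers.txt -u DOMAIN\username cmd /c "echo TEST >> \\server.company.tld\share\^%computername^%.log"

        rem from inside a batch file, double the percent signs instead:
        psexec @listofcomputers.txt -u DOMAIN\username cmd /c "echo TEST >> \\server.company.tld\share\%%computername%%.log"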

  • Sporadic crash of master-slave MySQL replication process

    - by obarshay
    Hello, I was wondering if someone has experienced this and can perhaps provide some insight into this issue. We have a plain-vanilla MySQL master-slave replication setup. The tables are MyISAM and the master can get quite read/write active. We use the slave instance to perform full daily backups in order to avoid bringing down the master server. The backup process does the following:

        STOP SLAVE SQL_THREAD
        mysqlhotcopy all tables
        START SLAVE SQL_THREAD

    Every once in a while (once a month or so) the replication breaks with varying error messages indicating a corrupt query or log file. Here's one that happened last night:

        mysql> show slave status \G
        *************************** 1. row ***************************
        Slave_IO_State: Waiting for master to send event
        Master_Host: server8.propreports.com
        Master_User: nexus8
        Master_Port: 3306
        Connect_Retry: 60
        Master_Log_File: bin.000045
        Read_Master_Log_Pos: 581644327
        Relay_Log_File: relay.000086
        Relay_Log_Pos: 94131
        Relay_Master_Log_File: bin.000045
        Slave_IO_Running: Yes
        Slave_SQL_Running: No
        Replicate_Do_DB:
        Replicate_Ignore_DB:
        Replicate_Do_Table:
        Replicate_Ignore_Table:
        Replicate_Wild_Do_Table:
        Replicate_Wild_Ignore_Table:
        Last_Errno: 1064
        Last_Error: Error 'You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '138070603'£' at line 1' on query. Default database: 'wtsdb'. Query: 'UPDATE fill SET clearing_fee='0.0E id='138070603'£'
        Skip_Counter: 0
        Exec_Master_Log_Pos: 4164743
        Relay_Log_Space: 577574251
        Until_Condition: None
        Until_Log_File:
        Until_Log_Pos: 0
        Master_SSL_Allowed: No
        Master_SSL_CA_File:
        Master_SSL_CA_Path:
        Master_SSL_Cert:
        Master_SSL_Cipher:
        Master_SSL_Key:
        Seconds_Behind_Master: NULL

    I follow this procedure to recover from the above error and resume replication:

        stop slave;
        change master to MASTER_LOG_POS = 4164743, MASTER_LOG_FILE = 'bin.000045';
        start slave;

    We have multiple servers set up this way and they all sporadically stop replicating with a similar error. Any advice on how to resolve this would be greatly appreciated.
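
    The mangled query text (a truncated value plus a stray '£') is what a torn relay log tends to look like, which on older setups is classically caused by the slave replaying a partially written event after a network hiccup or crash (hedged: the post doesn't show the I/O thread's error history). Two conservative my.cnf settings that narrow the window:

        # master: flush binlog events to disk as they are written
        sync_binlog = 1

        # slave: give up on a silent master connection sooner, so a broken
        # stream is re-requested instead of replayed from a torn relay log
        slave_net_timeout = 60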

  • RemoteApp .rdp embed creds?

    - by Chris_K
    Windows 2008 R2 server running Remote Desktop Services (what we used to call Terminal Services back in the olden days). This server is the entry point into a hosted application -- you could call it Software as a Service, I suppose. We have 3rd-party clients connecting to use it. I'm using RemoteApp Manager to build RemoteApp .rdp shortcuts to distribute to client workstations. These workstations are not in the same domain as the RDS server. There is no trust relationship between domains (nor will there be). There is a tightly controlled site-to-site VPN between the workstations and the RDS server; we're quite confident we have access to the server locked down. The RemoteApp being run is an ERP application with its own authentication scheme. The issue? I'm trying to avoid the need to create AD logins for every end user connecting to the RemoteApp server. In fact, since we're doing a RemoteApp and they have to authenticate to that app, I'd rather not prompt them at all for AD creds. I certainly don't want them caught up in managing AD passwords (and periodic expirations) for accounts they only use to get to their ERP login. However, I can't figure out how to embed AD creds in a RemoteApp .rdp file, and I don't really want to turn off all authentication on the RDS server at that level. Any good options? My goal is to make this as seamless as possible for the end users. Clarifying questions are welcome.
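
    For reference, .rdp files do carry identity fields, though not a portable password: username:s: and domain:s: pre-fill the account, while the password 51:b: field is DPAPI-encrypted per machine/user and so cannot be distributed. A sketch of the relevant lines (values are placeholders; disabling CredSSP/NLA as shown weakens the pre-session prompt and has security implications worth weighing):

        username:s:RDSDOMAIN\sharedapplogin
        domain:s:RDSDOMAIN
        enablecredsspsupport:i:0
        authentication level:i:0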

  • IE 8 crashes when opening an html file from an autorun CD

    - by dave4351
    On Windows 7, when I insert a CD with a very simple HTML page referenced in the autorun.inf file, Internet Explorer 8 opens and then immediately crashes with the error message "Internet Explorer has stopped working" and then checks for a solution. Then it says "A problem caused the program to stop working correctly. Windows will close the program and notify you if a solution is available." On Windows Vista, IE 8 just opens and then closes silently, with no error messages or help of any kind. I have since discovered this is due to IE 8's "protected mode"; turning it off fixes the issue so that the HTML file on the CD launches normally. What I'd like to know: is there any way to get around this, so that I can launch a simple HTML file in the user's default browser, using an autorun.inf file on a CD, without first having my users open IE 8 and turn off protected mode? Perhaps the most bizarre thing is that if IE is already open when you insert the CD, the HTML page just opens -- no crash. It only crashes if IE 8 happens to be closed when you insert the CD. And so a possible workaround may be to get the default browser to launch and then open the HTML file. All suggestions welcome.
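
    One hedged option along the lines of that workaround: autorun.inf's shellexecute entry hands the file to the shell, which resolves the user's default browser rather than invoking IE directly (subject to the machine's AutoPlay policy, and not guaranteed to avoid the protected-mode path). The file names below are placeholders:

        [autorun]
        ; launch via file association instead of a hard-coded browser
        shellexecute=docs\index.htm
        action=View the documentation
        icon=docs\cd.ico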

  • Networking problem: MrxSmb event 50 "Delayed Write Failed" errors occurring all of a sudden

    - by Johnny Musso
    JUST THIS MONTH, we have started getting reports from a number of very stable clients that MrxSmb event ID 50 errors keep appearing in their system event logs. Otherwise, they do not appear to have any networking problems, except that there is a critical legacy application which seems to either be generating the MrxSmb errors or suffering errors because of them. The legacy application is comprised of 16-bit and 32-bit code and has not been changed or recompiled in many years. It has always been stable on Windows XP systems. The customers that have the problem usually have a small (5 clients or fewer) peer-to-peer network with all Windows XP systems. All service packs are loaded on the XP machines. Note: the only thing that seems to correct the problem is disabling opportunistic locking. I don't like this solution because it seems to slow down the network and sometimes causes record-locking issues between users (on some networks). Also, this seems to have just started happening -- as if a Windows update for XP has caused it? However, I have removed recent updates and it did not correct the issue. Thanks in advance for any help you can provide.
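
    For reference, these are the registry values usually meant by "disabling opportunistic locking" on XP-era SMB, shown as a .reg sketch (whether a recent update changed their effective behavior is exactly the open question here):

        Windows Registry Editor Version 5.00

        ; client side: stop requesting oplocks
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MRXSmb\Parameters]
        "OplocksDisabled"=dword:00000001

        ; server side (the XP machine sharing the data): stop granting them
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
        "EnableOplocks"=dword:00000000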

  • Redundant OpenVPN connections with advanced Linux routing over an unreliable network

    - by konrad
    I am currently living in a country that blocks many websites and has unreliable network connections to the outside world. I have two OpenVPN endpoints (say vpn1 and vpn2) on Linux servers that I use to circumvent the firewall. I have full access to these servers. This works quite well, except for the high packet loss on my VPN connections. This packet loss varies between 1% and 30% depending on the time of day and seems to have low correlation; most of the time it seems random. I am thinking about setting up a home router (also on Linux) that maintains OpenVPN connections to both endpoints and sends all packets twice, to both endpoints. vpn2 would forward all packets from home on to vpn1. Return traffic would be sent both directly from vpn1 to home, and also through vpn2.

        +------------+
        |    home    |
        +------------+
           |      |
           | OpenVPN
           |  links |
           |      |
        ~~~~~~~~~~~~~~~~~~  unreliable connection
           |      |
        +----------+   +----------+
        |   vpn1   |---|   vpn2   |
        +----------+   +----------+
              |
        +------------+
        | HTTP proxy |
        +------------+
              |
          (internet)

    For clarity: all packets between home and the HTTP proxy will be duplicated and sent over different paths, to increase the chance that at least one of them arrives. If both arrive, the second one can be silently discarded. Bandwidth usage is not an issue, on either the home side or the endpoint side. vpn1 and vpn2 are close to each other (3 ms ping) and have a reliable connection. Any pointers on how this could be achieved using the advanced routing policies available in Linux?
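
    One sketch that matches "send everything twice" without custom routing rules: run both tunnels as tap devices and enslave them to a Linux bonding interface in broadcast mode, which transmits every frame on every slave. Assumptions: tap-mode OpenVPN on both links, a mirror-image bond on the vpn1 side, and tolerance for the duplicates this deliberately creates (TCP absorbs them; some UDP applications may not).

        # load bonding in broadcast mode (mode 3 duplicates frames on all slaves)
        modprobe bonding mode=broadcast miimon=100

        # enslave the two OpenVPN tap interfaces (tap0 -> vpn1, tap1 -> vpn2)
        ifenslave bond0 tap0 tap1
        ip link set bond0 up

        # then route proxy-bound traffic over the bond (placeholder address)
        ip route add 192.0.2.1/32 dev bond0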

  • Apache2 process stuck at 100% cpu, CLOSE_WAIT socket lingering

    - by mmazing
    I've been troubleshooting the heck out of this today, and I can't seem to find any information on how to determine what exactly is happening. Basically, on my development server, another developer is causing CLOSE_WAIT connections that eat up one or more apache2 processes for several hours if I don't restart apache2. strace on any of the processes yields no information, only that it was able to attach. mod_proxy is not enabled. KeepAlive is on, KeepAliveTimeout is 15 seconds, MaxKeepAliveRequests is 100. From what I've been reading, this may or may not be an Apache issue at all; CLOSE_WAIT simply means the peer has sent its FIN and the local application has not yet closed its end of the socket. I just can't believe that a server would be crippled so easily by a connection that is merely waiting to be closed, especially without any intervention for well over an hour. Any tips? I'm about to pull my hair out. Edit: Also, there are no unusual entries in any Apache log files. Edit 2: lsof -i shows only a single CLOSE_WAIT per hanging process. (That's what has been bothering me about this, as most other discussions talk about many CLOSE_WAIT connections, while I only have one per process.) The nature of the code that is running (PHP) doesn't really lend itself to closing open connections and whatnot. I can run the same code that he is executing, with the same session data, and not produce a hanging process.
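
    With one CLOSE_WAIT and 100% CPU per worker, a plausible reading (unverified) is that the PHP request itself is spinning and never returns control, so Apache never gets around to closing the finished client socket. A backtrace of the busy worker usually names the spinning code path:

        # identify the busy worker process/thread
        top -H -p $(pgrep -d, apache2)

        # attach and dump the C-level stack (debug symbols help)
        gdb -p <PID> -batch -ex bt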

  • 32 core (each physical core) 2.2 GHz or 12 core (6 physical cores) 3.0 GHz?

    - by Tejaswi Rana
    I am working on a multithreaded application (a Forex trading app built on C#) and had the client upgrade from a 12-core 3.0 GHz machine (Intel) to a 32-core 2.2 GHz machine (AMD). The PassMark benchmark results were significantly higher when using multiple cores for integer, floating-point and other calculations, while for a single-core calculation it was a bit slower than comparable systems (others with configs similar to the 12-core one). It also comes with 64 GB RAM (4 times as much as the other one) and a much faster SSD. After configuring and running the application on that machine, not only did it not perform as well, it was significantly slower: 30 seconds to 1 minute slower in an app that usually completes processing within 5-20 seconds. The application uses MaxDegreeOfParallelism (TPL), which I've tried setting to the number of cores and also to half of that. I've also tried running single-threaded and without setting any parallelism limits. While the hardware may have some issues, I am wondering if the CPU clock speed is the issue. I can overclock to 3.0 GHz, but is that even a good idea? Server info (AMD): http://www.passmark.com/forum/showthread.php?4013-AMD-Dual-6272-performance-is-60-lower-than-benchmarks -- it seems that benchmark was wrong to start with, officially. The Intel machine is an i7 3930K. OS (same on both): Windows 7 Professional 64-bit.
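
    For what it's worth, a minimal sketch of the TPL knob being discussed (illustrative values; DoWork and the iteration count are hypothetical stand-ins for the app's real workload). On NUMA-heavy parts like a dual Opteron, capping parallelism near the size of one NUMA node is sometimes worth benchmarking against "all cores":

        using System;
        using System.Threading.Tasks;

        class ParallelismSketch
        {
            static void DoWork(int i) { /* hypothetical per-item calculation */ }

            static void Main()
            {
                var options = new ParallelOptions
                {
                    // try ProcessorCount, half of it, and one NUMA node's worth
                    MaxDegreeOfParallelism = Environment.ProcessorCount / 2
                };
                Parallel.For(0, 1000000, options, DoWork);
            }
        }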

  • samba4 not building in Arch Linux

    - by kmplsv
        cp bin/tdbtool bin/tdbdump bin/tdbbackup /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/bin
        cp ./include/tdb.h /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/include
        cp tdb.pc /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/pkgconfig
        cp libtdb.a libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib
        rm -f /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so
        ln -s libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so
        rm -f /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so.1
        ln -s libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so.1
        mkdir -p /tmp/yaourt-tmp-root/aur-samba4/pkg/`/tmp/yaourt-tmp-root/aur-samba4/src/bin/python -c "import distutils.sysconfig; print distutils.sysconfig.get_python_lib(1, prefix='/opt/samba4/samba')"`
        cp tdb.so /tmp/yaourt-tmp-root/aur-samba4/pkg/`/tmp/yaourt-tmp-root/aur-samba4/src/bin/python -c "import distutils.sysconfig; print distutils.sysconfig.get_python_lib(1, prefix='/opt/samba4/samba')"`
        /bin/install -c -d /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/share/man/man8
        for I in manpages/*.8; do \
            /bin/install -c -m 644 $I /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/share/man/man8; \
        done
        /bin/install: cannot stat `manpages/*.8': No such file or directory
        make: *** [installdocs] Error 1
        Aborting...
        ==> ERROR: Makepkg was unable to build samba4.
        ==> Restart building samba4 ? [y/N]

    Any ideas as to what is causing my build to fail? I'm assuming it's an issue with manpages, but I can't figure out exactly what package it is looking for that I don't have.
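
    A hedged guess based on the failing step: make installdocs expects prebuilt manpages/*.8, and tdb's manpages are generated from DocBook sources, so a missing XSLT toolchain would leave nothing to install. Worth checking before patching the PKGBUILD:

        # assumption: docbook-xsl + xsltproc are what generate the *.8 files
        pacman -S --needed libxslt docbook-xsl
        # then rebuild the AUR package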
