Search Results

Search found 2886 results on 116 pages for 'behaviour'.


  • How can I change the "timeout" duration for Nautilus "find the filename as you type" feature?

    - by fred.bear
    I often get stalled by the long timeout while typing the first few letters of a file name in Nautilus... The current timeout seems to be 5 seconds; I'd prefer 1 second (as per item 2 on this page about Response Times). I don't use the mouse much, which means I either wait, or press Escape, when I don't find the file... I realize that this is a feature to some, but I'd rather not wait. Is there any way to change this timeout behaviour?

  • /usr/share/zoneinfo: when does the system time actually change?

    - by Steven Brown
    Please correct me if I am wrong here... When I change my system clock, this changes the file /usr/share/zoneinfo immediately; HOWEVER, the actual system time doesn't change until the next reboot, because /etc/localtime only then re-reads /usr/share/zoneinfo? I have seen behaviour similar to the above, whereby /usr/share/zoneinfo/ was accessed but the system time did not change until after the system had rebooted.
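
    For reference, the dependency actually runs the other way around: /usr/share/zoneinfo is a read-only database of zone files, and /etc/localtime is the copy (or symlink) that defines the active zone; repointing it takes effect for newly started processes immediately, with no reboot. A minimal check, assuming the usual Debian/Ubuntu layout (Europe/London is a placeholder zone):

        ls -l /etc/localtime                  # shows which zoneinfo file is active
        sudo ln -sf /usr/share/zoneinfo/Europe/London /etc/localtime
        date                                  # reflects the new zone immediately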

  • How to change the binding of Windows key which runs Unity's Dash?

    - by LFC_fan
    Currently, I am using the Unity Qt panel in my Gnome desktop, and when I press the Windows key, Unity's dash launches and I can't use any Compiz-based shortcuts. The same behaviour is exhibited when I log in to Unity 2D as well, as the Windows key launches the dash. I have no desire to change my Compiz shortcuts, so is there any way to remap Unity 2D's dash to some other key, or to disable this shortcut completely?
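
    A hedged sketch of one possible approach: Unity 2D kept its launcher settings in GConf, so something like the following might disable the Super-key binding. The key path is an assumption — browse /desktop/unity-2d in gconf-editor to confirm it exists on your install before relying on it:

        # key path assumed; verify with gconf-editor first
        gconftool-2 --type bool --set /desktop/unity-2d/launcher/super_key_enable false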

  • RAID1: can't replace faulty spare (marked again as 'faulty spare' within seconds)

    - by user212475
    I've got a problem that I cannot solve: our fileserver runs Xubuntu and three RAID1s. One of them has had a problem since Monday: it consists of sdb and sdc. sdb was marked as faulty by mdadm for unknown reasons. I used --remove to remove it from the RAID and then --add to add it again. All was fine; re-syncing started but never got above 0%, and after a few seconds sdb was again marked as 'faulty spare' (and the RAID therefore degraded, but clean). So I saved the first 512 bytes of the old sdb to a file, bought a new HDD of the same size (4 TB), shut down the computer and replaced sdb physically, switched the computer back on and wrote the 512 bytes back to the new drive so it would have the same partition info as the old drive (both are the same type, from the same company). But the new drive shows the same behaviour as the old: I can add it, re-syncing starts, and after a few seconds it is marked as 'faulty spare'. Here is exactly what I did:

        mdadm --remove /dev/md/1 /dev/sdb

    mdadm --detail /dev/md/1 gives me:

        /dev/md/1:
                Version : 1.2
          Creation Time : Sat Jun  8 22:32:05 2013
             Raid Level : raid1
             Array Size : 3906887360 (3725.90 GiB 4000.65 GB)
          Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
           Raid Devices : 2
          Total Devices : 1
            Persistence : Superblock is persistent

            Update Time : Thu Nov  7 06:56:13 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 0
          Spare Devices : 0

                   Name : File-Server:1  (local to host File-Server)
                   UUID : 44ed561f:b733e946:e69820f4:aba9b223
                 Events : 2424

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8       32        1      active sync   /dev/sdc

        mdadm --add /dev/md/1 /dev/sdb

    mdadm --detail /dev/md/1 then gives me:

        /dev/md/1:
                Version : 1.2
          Creation Time : Sat Jun  8 22:32:05 2013
             Raid Level : raid1
             Array Size : 3906887360 (3725.90 GiB 4000.65 GB)
          Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
           Raid Devices : 2
          Total Devices : 2
            Persistence : Superblock is persistent

            Update Time : Thu Nov  7 06:57:49 2013
                  State : clean, degraded, recovering
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0

         Rebuild Status : 0% complete

                   Name : File-Server:1  (local to host File-Server)
                   UUID : 44ed561f:b733e946:e69820f4:aba9b223
                 Events : 2431

            Number   Major   Minor   RaidDevice State
               2       8       16        0      faulty spare rebuilding   /dev/sdb
               1       8       32        1      active sync   /dev/sdc

    and after a few seconds:

        /dev/md/1:
                Version : 1.2
          Creation Time : Sat Jun  8 22:32:05 2013
             Raid Level : raid1
             Array Size : 3906887360 (3725.90 GiB 4000.65 GB)
          Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
           Raid Devices : 2
          Total Devices : 2
            Persistence : Superblock is persistent

            Update Time : Thu Nov  7 06:57:50 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0

                   Name : File-Server:1  (local to host File-Server)
                   UUID : 44ed561f:b733e946:e69820f4:aba9b223
                 Events : 2436

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8       32        1      active sync   /dev/sdc
               2       8       16        -      faulty spare   /dev/sdb

    The behaviour is the same if I zero the superblock (mdadm --zero-superblock /dev/sdb) before adding sdb. I run all commands as root, and the system holds three more 4 TB drives, i.e. the mainboard can handle them. The old hard drive was checked for errors using badblocks and all is fine. Does anybody have any idea what the problem is?
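
    The 'faulty spare' flag is normally set in reaction to a lower-level I/O error, so the kernel log and the drive's own error log are the places that usually name the real cause; a short diagnostic sketch:

        dmesg | grep -iE 'sdb|ata|error'   # kernel-level I/O errors around the --add
        sudo smartctl -l error /dev/sdb    # the drive's internal error log
        cat /proc/mdstat                   # current state of all arrays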

  • Xubuntu keyboard doesn't work properly

    - by OmPS
    I have installed Xubuntu 12.04, but since the installation my "d" and "e" keys don't respond properly. When I press "d", an Enter is entered automatically as well, giving the error d: command not found; the same happens when Caps Lock is on. While pressing "e" I get e+ or +e. I don't know why this weird behaviour occurs; I tried reinstalling Xubuntu, but the same thing happens again. My keyboard layout is English (US). Regards,
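
    One way to narrow this down to keymap versus hardware, as a diagnostic sketch: watch the raw X events for the affected keys; a wrong keycode or a spurious extra Return event shows up here directly:

        xev | grep -A2 --line-buffered KeyPress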

  • Are UML class diagrams adequate to design javascript systems?

    - by Vandell
    Given that UML is oriented towards a more classic approach to object orientation, is it still usable in a reliable way to design JavaScript systems? One specific problem that I can see is that class diagrams are, in fact, a structural view of the system, while JavaScript is more behaviour-driven; how can you deal with that? Please keep in mind that I'm not talking about the real-world domain here; it's a model for the solution that I'm trying to achieve.

  • How can I stop the panel from showing on an external projector/monitor?

    - by hellocatfood
    In Ubuntu 10.10 (and possibly previous versions), when I connected my laptop to an external source (monitor/projector) and set it not to clone my desktop, the display on the external source would not show the (gnome-)panel (just a full-screen desktop with no icons). Since upgrading to Ubuntu 11.04 (using Unity), the external source now shows the panel whenever I connect it. Is there any way to revert to the previous behaviour in 11.04?

  • How to disable automatic download of new podcast episodes in Rhythmbox?

    - by meceso
    After adding a new podcast to Rhythmbox, it starts downloading the newest episode automatically without asking the user for permission. I find this behaviour pretty annoying, especially when you have a lot of podcasts and are using 3G broadband to access the internet. Can you update the podcast feed and then choose manually which episodes you want to download? I did not find this in the settings/preferences...

  • Print screen key can no longer select a region to capture

    - by Steven Sproat
    After upgrading from 11.04 to 11.10, I've noticed that the "Take Screenshot" application is no longer launched when I hit Print Screen; instead, the entire screen is captured automatically. In 11.04, hitting the key would pop up the "Take Screenshot" application, which allows me to capture a rectangular region of the screen. Can this be reverted? I never want to capture the whole screen, only portions of it, which with this changed behaviour means manually cropping out the section in GIMP. Cheers.
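
    For what it's worth, gnome-screenshot kept a region-capture mode; binding the command below to the Print Screen key via a custom keyboard shortcut (System Settings - Keyboard) restores the old select-a-rectangle workflow:

        gnome-screenshot --area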

  • How do you completely reset monitor configuration (e.g. what used to be dpkg-reconfigure xserver-xorg)?

    - by John
    Using Unity on 11.04. I have a secondary monitor (actually two: one at home, one at work). The twin-monitor setup at home works perfectly (ViewSonic monitor). At work, I get very buggy behaviour, such as full-screen applications 'ghosting' when maximized, and other strange effects. I would like to try completely resetting the monitor configuration (not Unity or Compiz) before doing anything else. In 10.10 this would be accomplished with: dpkg-reconfigure xserver-xorg
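
    dpkg-reconfigure xserver-xorg became a near no-op once X moved to runtime auto-detection; a rough equivalent, as a sketch (paths assumed), is to clear the per-user layout that GNOME stores plus any static xorg.conf, then let X re-detect everything at the next login:

        rm -f ~/.config/monitors.xml     # per-user monitor layout
        sudo rm -f /etc/X11/xorg.conf    # only if a static config exists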

  • The need to reduce mesh count

    - by OJW
    In Panda3D, if I load a model and place 10,000 references to it in the scene graph, it runs at (say) 2 Hz. If I load a 3D model containing 10,000 copies of that exact same object, it runs at (say) 60 Hz, as does using the flattenStrong() command, which is effectively the same thing but at runtime. So the question is: is this behaviour a peculiarity of Panda3D, or is it a fundamental law that applies to all game engines?

  • How come the ls command prints in multiple columns on a tty but only one column everywhere else?

    - by David Lou
    Even after using Unix-like OSes for a couple of years, this behaviour still baffles me. When I use the ls command in a directory that has lots of files, the output is usually nicely formatted into multiple columns. Here's an example:

        $ ls
        a.txt  C.txt  f.txt  H.txt  k.txt  M.txt  p.txt  R.txt  u.txt  W.txt  z.txt
        A.txt  d.txt  F.txt  i.txt  K.txt  n.txt  P.txt  s.txt  U.txt  x.txt  Z.txt
        b.txt  D.txt  g.txt  I.txt  l.txt  N.txt  q.txt  S.txt  v.txt  X.txt
        B.txt  e.txt  G.txt  j.txt  L.txt  o.txt  Q.txt  t.txt  V.txt  y.txt
        c.txt  E.txt  h.txt  J.txt  m.txt  O.txt  r.txt  T.txt  w.txt  Y.txt

    However, if I try to redirect the output to a file, or pipe it to another command, only a single column appears in the output. Using the same example directory as above, here's what I get when I pipe ls to wc:

        $ ls | wc
            52      52     312

    In other words, wc thinks there are 52 lines, even though the output to the terminal had only 5. I haven't observed this behaviour in any other command. Can you explain this to me?
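
    The short answer: ls checks whether stdout is a terminal (isatty) and falls back to one name per line when it is not, which is friendlier to programs like wc. Both behaviours can be forced explicitly:

        ls -C | head    # keep the multi-column layout even through a pipe
        ls -1           # one name per line even on a terminal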

  • Change the number of consecutive frequent ssh logins allowed before temporarily blocking the user login

    - by Kenneth
    My server currently refuses to let a user log in for a certain amount of time (maybe ~20 min) if the user makes 3 frequent consecutive ssh logins. Can I change this behaviour (say, relax the definition of frequent from 'within 5 sec' to 'within 10 sec', or increase the number of consecutive logins from 3 to 5)? Thanks. Added: Ah, now I think the problem is not with ssh itself. I just tried on another newly installed server, and consecutive successful logins won't block the user there. I have no sudo permission on the server I mentioned above. Now I suspect this behaviour may be caused by the firewall on the system. Thanks for everyone's comments. ADDED 2: Ah, after some searching, I think the server is using /sbin/iptables to do it, as I can see the iptables program is there even though I don't have permission to list the rules. Thanks everyone, special thanks to jaume and Mark!
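
    For reference, the usual shape of such a firewall rule uses iptables' 'recent' match, where the time window and attempt count live in --seconds and --hitcount; the numbers below are assumptions to show where each knob sits (root access required to change them):

        iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
                 -m recent --set --name SSH
        iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
                 -m recent --update --seconds 10 --hitcount 5 --name SSH -j DROP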

  • Why are SMART error rates going down?

    - by Jeff Shattock
    I have a hard drive that's part of a Linux software RAID5 array. SMART has reported that its multi_zone_error_rate was 0, then 1, then 3. So I figured I'd better start backing up more frequently and prepare to replace the drive. Now, today, the multi_zone_error_rate of that very same drive is back down to 1. It seems that 2 errors unhappened while I wasn't looking. I've also seen similar behaviour by inspecting the syslog on the server:

        Jun  7 21:01:17 FS1 smartd[25593]: Device: /dev/sdc, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
        Jun  7 21:01:17 FS1 smartd[25593]: Device: /dev/sde, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
        Jun  7 21:01:18 FS1 smartd[25593]: Device: /dev/sdg, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
        Jun  8 02:31:18 FS1 smartd[25593]: Device: /dev/sdg, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200
        Jun  8 03:01:17 FS1 smartd[25593]: Device: /dev/sdc, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200
        Jun  8 03:01:17 FS1 smartd[25593]: Device: /dev/sde, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200

    These are raw values, not the human-useful values that smartctl -a produces, but the behaviour is similar: error rates changing, then undoing the change. None of these are the drive that had the multi_zone weirdness. I haven't seen any problems from the RAID; its most recent scrub (< 24 hours ago) came back totally clean. The only thing I can think of is that the SMART reporting circuitry on the drive isn't working properly all the time. The cables are in tight on the drive and board. What's going on here?
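
    A quick way to see why this can happen, as a sketch: smartctl -A shows both the vendor-scaled, normalised columns (VALUE/WORST/THRESH) and the raw counter. The normalised figure is recomputed by the drive and can legitimately move in both directions, even while the raw counter only ever grows:

        sudo smartctl -A /dev/sdc | grep -E 'Seek_Error_Rate|Multi_Zone_Error_Rate'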

  • USB External HDD NOT spinning down on Windows Vista / Windows 7

    - by Deepak
    I have 3 external 2.5" USB HDDs, all from different manufacturers and with different capacities. I also have access to multiple Windows Vista / Windows 7 / Windows XP computers. My problem is that with the Windows Vista and Windows 7 computers, the external USB drives DO NOT spin down when I do "Safely remove hardware". Windows tells me that I can safely remove the device, but I can see (and feel the rotation of the disk when I touch the casing) that the disks are still spinning and NEVER spin down. They also never go into their suspended state (which is generally signalled by a slow flashing of the activity LED). However, with Windows XP, when I do "Safely remove hardware", the drives do indeed spin down without any issues and go into their respective suspended states. I notice that this behaviour is consistent across all 3 of my drives and on different hardware. Has anybody else noticed the same issue? Is there any way to get the same behaviour as Windows XP on Windows Vista and 7? I feel that, in the long run, disconnecting the drives while they are still spinning will have a negative effect on their life span. Thanks, Deepak.

  • Mac OS X Lion (10.7) Drive Encryption

    - by Skoota
    My iMac has two drives (a 256 GB solid-state drive and a regular 2 TB hard drive). The Mac OS X Lion system is installed on the solid-state drive and, like many other users, I have moved my user profile folder onto the secondary 2 TB drive. However, as you may be aware, FileVault 2 on Mac OS X Lion (10.7) only encrypts the system drive. This leaves my data drive (containing my user profile folder, with all of my data) unencrypted. I am aware that workarounds for this issue exist (such as https://github.com/jridgewell/Unlock), but I am not happy with the results, since they involve decrypting the data drive on startup using a LaunchDaemon (before any users have logged into the computer), essentially meaning that any user who logs onto the computer will see the unencrypted drive. I would like a method which only decrypts the data when an authorised user logs into the computer. As such, is there a way to do one of the following? 1. Encrypt the entire data drive and only decrypt it when an authorised user logs into the computer. This would be equivalent to the Lion FileVault 2 behaviour, but on a secondary drive rather than the system drive. 2. Encrypt only the user profile folder on the data drive, and only decrypt the folder when the user logs into the computer. This would be equivalent to the behaviour of FileVault 1 on previous versions of Mac OS X. I am happy to pay for a commercial third-party product that provides the required feature(s), but I have not yet been able to find one. Thanks in advance for any assistance.
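
    One avenue worth testing for option 1, as a hedged sketch: Lion's CoreStorage can encrypt a non-boot volume from the command line, and the passphrase can then be saved into a user's login keychain so the volume unlocks when that user logs in. The volume name below is an assumption — check diskutil list first:

        diskutil list
        sudo diskutil coreStorage convert /Volumes/Data -passphrase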

  • EXCEL workbook, intermittently, takes 30 seconds to load

    - by Julio Nobre
    I am trying to figure out why a simple .XLS Excel workbook is taking, randomly, 30 seconds to open. Before answering, please bear in mind the following.

    Problem symptoms:

      - Hanging is intermittent and takes exactly 30 seconds;
      - During hanging there is no CPU or disk activity;
      - It only happens during workbook load; everything runs smoothly after that;
      - Windows Explorer.exe hangs on the folder, but all other folders, the system and applications remain responsive;
      - There are no consecutive hangings; I have to wait a while to reproduce the behaviour;
      - All workbooks are located on a local drive (C:\BPI);
      - The workbook has no macros and no add-ins;
      - Office 2003 has been in use for several years;
      - The computer is running Windows XP;
      - The computer has several network mapped drives, all addressed to the main file server;
      - Recently, the main file server was replaced by Windows 2011 SBS Standard Edition.

    What I have done so far:

      - I traced the machine's Explorer.exe using Process Monitor, added the Duration column, and filtered by Duration > 1. That is how I found that the hang takes exactly 30 seconds. For further information, please refer to Oliver Salzburg's tutorial;
      - Using Process Monitor, I also figured out that five operations were taking most of the sample's duration; in the Operation column of the captured data, one single operation was taking 29 seconds;
      - I have tried different workbooks (all of them smaller than 30 KB);
      - I have temporarily removed all shortcuts in the user's Documents folder that pointed to network drives or shares;
      - I have run CCleaner to fix registry issues;
      - I made sure that there were no external links in the tested workbooks;
      - I have reproduced this behaviour for hours;
      - I have researched extensively on the web for hours.

    (Screenshot in the original post: Process Monitor's collected and filtered data.)
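
    The fixed 30 seconds is characteristic of an SMB connection timeout, which fits the recently replaced file server; a cheap first test is to list the mapped drives and drop any that still point at the old machine (the drive letter below is an example):

        net use
        net use X: /delete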

  • E-mail duplication problem

    - by Gavin Osborn
    I have taken out a hosting agreement with a well-respected hosting provider for a couple of internet-facing servers. We have deployed several applications to these servers which send various e-mails back to us for reporting purposes.

    Context:

      - Each server runs Windows Server 2003 R2 with the IIS 6.0 SMTP service installed.
      - Each application is configured to use the local instance of IIS to send e-mails.
      - The external IP address of each server is mapped to a particular domain, e.g. server1.mydomain.com and server2.mydomain.com.
      - The e-mails are sent from a company domain name, not the domain name of the hosted servers (eg: [email protected]).

    Symptoms: A small number (<1%) of e-mails sent from these applications appear to be duplicated. These are exact duplicates in terms of both content and message headers.

    The Fix: I contacted my hosting provider and they told me this was a common problem and instructed me to:

      - Change the HELO response of the mail server service to an FQDN (server1.mydomain.com and server2.mydomain.com);
      - Create a DNS A record that resolves the FQDN of the mail server to the primary IP address of the sending mail server;
      - Create a PTR record that resolves the primary IP address back to the mail server's FQDN;
      - In the sending domain's (mycompanydomain.com) DNS zone file, add the appropriate SPF record for the hosted servers, e.g.: v=spf1 a mx include:mydomain -all

    The Problem Continues: I made all of the changes as prescribed above. I was a little hesitant, because these steps seemed more suited to stopping messages from being blocked than to stopping them from being duplicated, but I am certainly no expert in these matters. It has been 5 days since I applied this fix and the problem still persists. I am certain that these problems are not a bug in the software, because they are 4 different applications installed on 2 different servers, all of which are exhibiting this strange behaviour. The behaviour has also not been seen in our UAT environment. Were my hosts correct to suggest this fix? If not, does anyone know what could be the cause of this problem? Many thanks
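
    It is worth verifying from outside that the new records actually resolve as intended; a sketch using the names from the question (the IP address below is an example):

        dig +short a server1.mydomain.com
        dig +short -x 203.0.113.10           # PTR must return the same FQDN
        dig +short txt mycompanydomain.com   # should show the v=spf1 string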

  • NFS confusion - writing many small files

    - by Antonis Christofides
    I have a Debian squeeze amd64 machine which is at the same time an NFS4 server and client (it mounts itself through NFS4). The local directory that leads directly to disk is /nfs4exports/mydir, whereas /nfs4mounts/mydir is the same thing mounted through NFS, using the machine's external IP address. Here is the line from fstab:

        176.9.116.102:/mydir /nfs4mounts/mydir nfs4 soft 0 0

    I have an application that writes many small files. If I write directly to /nfs4exports/mydir, it writes thousands of files per second; but if I write to /nfs4mounts/mydir, it writes 4 files per second or so. I can greatly increase the speed if I add async to /etc/exports. (Writing a single large file to the NFS directory goes at more than 100 MB/s.) I am confused by the description of async in NFS. If my application accesses the local directory, system calls like write and close return even if caches have not been flushed to permanent storage. Apparently this is not true of NFS sync behaviour. However, with NFS async behaviour, even calls like fsync are ignored. Isn't it possible to work like local files, i.e. generally work asynchronously, but honour fsync and O_SYNC?
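
    For reference, this is what the async export looks like; the client spec and other options below are assumptions mirroring the question's setup. The trade-off is that data the application never fsync'd can be lost on a server crash, in exchange for small-file throughput:

        /nfs4exports/mydir 176.9.116.102(rw,async,no_subtree_check)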

  • IPv6 working fine, IPv4 throws OpenSSL error

    - by jippie
    I am building a webserver (http://blog.linformatronics.nl/), which functions just fine on both IPv4 and IPv6 when using a non-SSL connection. However, when I connect to it through https, IPv6 works as expected, but an IPv4 connection throws a client-side error. Server-side logs are empty for the IPv4/https connection. Summarized in a table:

             | http  | https
        -----+-------+-------------------------------------------------------
        IPv4 | works | OpenSSL error, failed. No server side logging.
        -----+-------+-------------------------------------------------------
        IPv6 | works | self signed certificate warning, but works as expected

    Apparently the SSL tunnel isn't even set up, which accounts for the Apache logs being empty. But why does it work fine for IPv6 and fail for IPv4? My question is: why is this OpenSSL error being thrown, and how can I solve it? Below is some extra information about the setup.

    IPv6 https — command used to reproduce the behaviour:

        $ wget --no-check-certificate -O /dev/null -6 https://blog.linformatronics.nl
        --2012-11-03 15:46:48--  https://blog.linformatronics.nl/
        Resolving blog.linformatronics.nl (blog.linformatronics.nl)... 2001:980:1b7f:1:a00:27ff:fea6:a2e7
        Connecting to blog.linformatronics.nl (blog.linformatronics.nl)|2001:980:1b7f:1:a00:27ff:fea6:a2e7|:443... connected.
        WARNING: cannot verify blog.linformatronics.nl's certificate, issued by `/CN=localhost':
          Self-signed certificate encountered.
        WARNING: certificate common name `localhost' doesn't match requested host name `blog.linformatronics.nl'.
        HTTP request sent, awaiting response... 200 OK
        Length: 4556 (4.4K) [text/html]
        Saving to: `/dev/null'

        100%[=======================================>] 4,556       --.-K/s   in 0s

        2012-11-03 15:46:49 (62.5 MB/s) - `/dev/null' saved [4556/4556]

    IPv4 https — command used to reproduce the behaviour:

        $ wget --no-check-certificate -O /dev/null -4 https://blog.linformatronics.nl
        --2012-11-03 15:47:28--  https://blog.linformatronics.nl/
        Resolving blog.linformatronics.nl (blog.linformatronics.nl)... 82.95.251.247
        Connecting to blog.linformatronics.nl (blog.linformatronics.nl)|82.95.251.247|:443... connected.
        OpenSSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
        Unable to establish SSL connection.

    Notes: I am on Ubuntu Server 12.04.1 LTS.
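
    The unknown protocol error usually means the far end answered port 443 with something that is not TLS at all; a quick check of what actually responds on the IPv4 path (an HTTP banner here would point at a port-forward landing on the plain-HTTP vhost):

        openssl s_client -connect 82.95.251.247:443 < /dev/null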

  • HTTPS/HTTP redirects via .htaccess

    - by Winston
    I have a somewhat complicated problem I am trying to solve. I've used the following .htaccess directives to enable some sort of pretty URLs, and that worked fine. For example, http://myurl.com/shop would be redirected to http://myurl.com/index.php/shop (note that stuff such as myurl.com/css/mycss.css does not get redirected):

        RewriteEngine on
        RewriteCond ${REQUEST_URI} !^(index\.php$)
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteRule ^/?(.*)$ index.php/$1 [L]

    But now, as I have introduced SSL to my webpage, I want the following behaviour: I basically want the above behaviour for all pages except admin.php and login.php. Requests to those two pages should be redirected to the HTTPS part, whereas all other requests should be processed as specified above. I have come up with the following .htaccess, but it does not work: https://myurl.com/shop does not get redirected to http://myurl.com/index.php/shop, and http://myurl.com/admin.php does not get redirected to https://myurl.com/admin.php.

        RewriteEngine on
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^(admin\.php$|login\.php$)
        RewriteRule ^(.*)$ http://%{HTTP_HOST}/${REQUEST_URI} [R=301,L]

        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^(admin\.php$|login\.php$)
        RewriteRule ^(.*)$ https://myurl.com/%{REQUEST_URI} [R=301,L]

        RewriteCond %{REQUEST_URI} !^(index\.php$)
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteRule ^/?(.*)$ index.php/$1 [L]

    I know it has something to do with the rules overwriting each other, but I am not sure, since my knowledge of Apache is quite limited. How could I fix this apparently not-that-difficult problem, and how could I make my .htaccess more compact and elegant? Help is very much appreciated, thank you!
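
    One likely culprit, for what it's worth: in a RewriteCond, %{REQUEST_URI} carries its leading slash (/admin.php), so a pattern anchored as ^(admin\.php$|login\.php$) never matches and its negation always succeeds; note also the stray ${REQUEST_URI} (rewrite-map syntax) where %{REQUEST_URI} is meant. A sketch of the intended rules with the anchors fixed, assuming admin.php and login.php live in the document root:

        RewriteEngine on

        # admin.php and login.php go to https
        RewriteCond %{HTTPS} off
        RewriteRule ^(admin\.php|login\.php)$ https://%{HTTP_HOST}/$1 [R=301,L]

        # everything else goes back to http
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^/(admin\.php|login\.php)$
        RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1 [R=301,L]

        # pretty URLs for whatever is left
        RewriteCond %{REQUEST_URI} !^/index\.php
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteRule ^/?(.*)$ index.php/$1 [L]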

  • D-LINK DIR-615 router keeps giving my wireless devices bad ip addresses

    - by mlsteeves
    I have a D-LINK DIR-615 router, and wired devices have no problem getting an IP; however, wireless devices end up with a 169.254.x.x address (subsequently, they cannot access the internet through the router). I have removed all wired connections from the router, so there is no other DHCP server running. I've also gone back to the store and replaced it with another, thinking that maybe it was defective. According to the router, it gave 192.168.0.101 to the wireless device. According to the wireless device, it got 169.254.67.71. I've tried both a laptop and an iPod Touch; both exhibit the same behaviour. Has anyone seen this type of behaviour, or have any ideas of stuff to try?

    NEW INFORMATION: I looked at the logs on the router, and when the wireless device tries to connect, this is what is logged:

        Sep 10 18:13:39 UDHCPD sending OFFER of 192.168.0.111
        Sep 10 18:13:31 UDHCPD sending OFFER of 192.168.0.111
        Sep 10 18:13:26 UDHCPD sending OFFER of 192.168.0.111
        Sep 10 18:13:23 UDHCPD sending OFFER of 192.168.0.111
        Sep 10 18:13:21 UDHCPD sending OFFER of 192.168.0.111

    I connected a computer directly to the router, and here is what that looks like:

        Sep 10 18:14:18 UDHCPD Inform: add_lease 192.168.0.110
        Sep 10 18:14:14 UDHCPD sending ACK to 192.168.0.110
        Sep 10 18:14:14 UDHCPD sending OFFER of 192.168.0.110

    Not sure if that helps or not.

  • Queries passed to SQL Server are getting corrupted

    - by adrianbanks
    We are experiencing a bizarre error with our application at a customer site. We have managed to narrow it down to the point where we can replicate the behaviour using just Management Studio and SQL Server. We have two machines, A and B:

        +------------+                +--------------------+
        |    [A]     |                |        [B]         |
        | Management | -------------- | SQL Server 2008 R2 |
        |   Studio   |                |   Enterprise x64   |
        +------------+                +--------------------+

    We are running a SQL script in Management Studio on machine A against the SQL Server instance on machine B. We are not actually executing the script, just parsing it. Most of the time, the parse operation works fine. Occasionally (seemingly randomly), the parse operation fails with a syntax error. The error message shows the part of the script with the error, which appears as some SQL from the original script that has been truncated and has random characters appended to it. An example. The original SQL:

        SELECT DISTINCT ST.TABLE_NAME as TableName
        FROM INFORMATION_SCHEMA.TABLES AS ST
        INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC ON SC.TABLE_NAME = ST.TABLE_NAME
        WHERE ST.TABLE_TYPE = 'BASE TABLE'
          AND SC.COLUMN_NAME = 'Identity'
          AND ST.TABLE_NAME != 'dtproperties'
        ORDER BY ST.TABLE_NAME

    The SQL that is in error (as reported by SQL Server):

        SELECT DISTINCT ST.TABLE_NAME as TableName
        FROM INFORMATION_SCHEMA.TABLES AS ST
        INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC ON SC.TABLE_NAME = Sa?

    The above example shows how the query is being corrupted. It doesn't always happen, and it is not always the same bit of SQL that causes the error. Parsing this script against another SQL Server instance produces no errors, showing that the script is fine. It appears that something is corrupting the SQL that is being received by the server. This leads me to think that the problem lies either at the client end or in the transmission of the SQL from the client to the server. I have a SQL trace from a period where an error occurred, which shows that the SQL had already been corrupted when SQL Server received it. We have been unable to track down any possible cause of this behaviour, and so cannot find a fix. Because the errors occur seemingly randomly, it is also very hard to produce reproduction steps for a bug report. Any ideas?

  • Scripting a permanent CTRL / CAPS swap in Gnome?

    - by Duncan Bayne
    I have a bash script that I use to configure a vanilla Ubuntu (10.10 Maverick Meerkat) installation to be exactly the way I want it. I make extensive use of gconftool-2 to configure the desktop, set up shortcut keys, etc. Now I'm trying to swap the CTRL and CAPS keys. I have found two ways of doing this:

      - In Gnome, go to System - Preferences - Keyboard - Layout - Options and make the change in there. This works well, but I don't know how to script it; the setting doesn't seem to be stored in the usual place, as I can't find it with gconf-editor.
      - Add the line setxkbmap -option "ctrl:swapcaps" to my .bashrc file. That works too, until I suspend the machine and then resume it. At that point the CTRL and CAPS behaviour returns to normal, until I cause .bashrc to be run again by opening a new shell. This behaviour has been reported as a bug in RedHat.

    Could someone please suggest a way of switching those keys that is both permanent and can be scripted? I'm sure I must be missing something obvious here...
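
    A scriptable workaround for the reset-on-resume, as a sketch: reapply the option from a pm-utils hook, which 10.10 runs on resume (the username and DISPLAY below are assumptions to adapt):

        sudo tee /etc/pm/sleep.d/99_swapcaps <<'EOF'
        #!/bin/sh
        case "$1" in
            resume|thaw)
                DISPLAY=:0 su duncan -c 'setxkbmap -option "ctrl:swapcaps"'
                ;;
        esac
        EOF
        sudo chmod +x /etc/pm/sleep.d/99_swapcaps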

  • Host system resets (crashes) when using VMWare or VirtualBox and 64-bit guest systems

    - by sinni800
    I have been trying to install virtual systems on VMware for a while now and have encountered strange behaviour from my PC. The behaviour is as follows:

      - In "automatic" virtualization mode it either outputs a cryptic error message (which I can maybe give later, if I can reproduce it) right on startup (before even the BIOS) or it resets the complete HOST system (black screen, BIOS...).
      - If I install Windows XP as the guest, it works well in "binary translation" mode.
      - If I try installing Linux, in "binary translation" mode it crashes 1 or 2 seconds after I hit Enter on the GRUB selection screen (after the first page of kernel messages rolled in).
      - Using VirtualBox, it crashes right in the BIOS. It gave me a bluescreen though! 0x00000101: CLOCK_WATCHDOG_TIMEOUT: a clock interrupt was not received on a secondary processor within the allocated time interval

    NEWS: I tried VirtualBox again and it did not completely crash the computer this time. It gave me a critical error and a log file: http://pastebin.com/yKZSDs91

    In conclusion, it will crash instantly if VT-x is activated. If not, it seemingly only crashes if I try to install something 64-bit. Another update: yes, it ONLY crashes when the guest is 64-bit!

    What I tried: reinstalling Windows (my Windows installation was quite broken, so it seemed natural; didn't work though), and a new BIOS.

    What I am certain of: virtualization extensions are activated in the BIOS.

    What my computer specs are:

      - ASUS P8P67 LE mainboard, newest BIOS/EFI firmware
      - Intel Core i5 2500K
      - ATI Radeon HD 5770
      - 16 GB Corsair 1333 MHz DDR3 RAM, 4 x 4 GB
