Search Results

Search found 4220 results on 169 pages for 'generating passwords'.

Page 124/169 | < Previous Page | 120 121 122 123 124 125 126 127 128 129 130 131  | Next Page >

  • Skip Corrupt Revisions During SvnAdmin Load

    - by cisellis
    I have a dump file that I am generating from VSS with the VSS2SVN script. I've tested the generated dump file before, and some of the revisions are corrupt for one reason or another (binary data or long path strings seem to be the main culprits). This is fine. In the past I have used svndumpfilter to split the dump file, remove the corrupt revisions, and continue loading the repository. It worked, but it took a lot of manual effort to start the load, hit the bad revision, split the dump file, continue loading the repo, and so on. This dump file is pretty large (~5GB) and takes several hours to load. I think I know the answer to this, but is there any way to simply tell svnadmin load to keep going and skip corrupt revisions? I know how to verify, back up, etc. the dump file and don't need any of that. I don't care about recovering corrupt revisions. I just want to start the load, walk away, and not worry about checking it every few hours to manually remove the corrupt revisions. Is that possible? Thanks.
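    One workaround, assuming Subversion 1.8 or later (where svnadmin load accepts -r to replay a subset of a dumpfile), is to drive the load one revision at a time and log the failures instead of stopping. A rough, untested sketch; the repository path, dump name, and revision count are placeholders:

      #!/bin/sh
      # Load a dump revision by revision, logging and skipping any that fail.
      REPO=/srv/svn/repo        # hypothetical repository path
      DUMP=vss.dump             # hypothetical dump file name
      LAST=42000                # highest revision in the dump -- adjust
      for r in $(seq 0 "$LAST"); do
          if ! svnadmin load -r "$r" --quiet "$REPO" < "$DUMP"; then
              echo "revision $r skipped" >> skipped-revisions.log
          fi
      done

    This rescans the dump for every revision, so it is slow on a ~5GB file, but it runs unattended, which is the point.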

    Read the article

  • New Rapid Install StartCD 12.2.0.48 for EBS 12.2 Now Available

    - by Max Arderius
    A new Rapid Install startCD (Patch 18086193) for Oracle E-Business Suite Release 12.2 is now available. We recommend that all EBS customers installing or upgrading to EBS 12.2 use this latest update. The startCD updates are distributed to customers via a My Oracle Support patch, which can be uncompressed on top of any previous 12.2 startCD under the main staging area. This patch replaces any previous startCDs.

    What's New in This Update? This new startCD version 12.2.0.48 includes important fixes for multi-node installs, RAC, pre-install checks, platform-specific issues, and upgrade scenario failures:

      18703814 - QREP:122:RI:ISSUE WITH CHECKOS.CMD
      18689527 - QREP:122:RI:ISSUE WITH FNDCORE.DLL SHIPPED AS PART OF R122 PACKAGE
      18548485 - QREP1224:4:JAR SIGNER ISSUE DUE TO THE RI UPGRADE AUTOCONFIG CHANGES
      18535812 - QREP:1220.48_4: 12.2.0 UPGRADE FILE SYSTEM LAY OUT IS AFFECTING THE DB TABLES
      18507545 - WIN: UNABLE TO LAY DOWN FS PRIOR TO 12.2 UPGRADE WITHOUT AFFECTING RUNNING DB
      18476041 - UNABLE TO LAY DOWN FS PRIOR TO 12.2 UPGRADE WITHOUT AFFECTING PRODUCTION DB
      18459887 - R12.2 INSTALLATION FAILURE - OPMNCTL: NOT FOUND
      18436053 - START CD 48_4 - ISSUES WITH TEMP SPACE CHECK
      18424747 - QREP1224.3:ADD SERVER BROWSE BUTTON NOT WORKING
      18421132 - *RW-50010: ERROR: - SCRIPT HAS RETURNED AN ERROR: 1
      18403700 - QREP122.48:RI:UPGRADE RI PRECHECK HUNG IN SPLIT TIER APPS NODE ( NO SILENT )
      18383075 - ADD VERBOSE OPTION TO RAC VALIDATION
      18363584 - UPTAKE INSTALL SCRIPTS FOR XB48_4
      18336093 - QREP:122:RI:PATCH FS ADMIN SERVICE RUNNING AFTER RI UPGRADE CONFIGURE MODE
      18320278 - QREP:1224.3:PLATFORM SPECIFIC SYNTAX ERRORS WITH DATE COMMAND IN DB CHECKER
      18314643 - DISABLE SID=DB_NAME FOR RI UPGRADE FLOW IN RAC
      18298977 - RI: EXCEPTION WHILE CLICKING RAC NODES BUTTON ON A NON-RAC SERVER
      18286816 - QREP122:STARTCD48_3:TRAVERSING FROM VISION PASSW SCREEN TO PROD
      18286371 - QREP122:STARTCD48_3:AMBIGUOUS MESSAGE DURING STAGE AREA CHECK ON HP
      18275403 - QREP122:48:RI UPGRADE WITH EOH POST CHECKS HANGS IN SPLIT TIER DB NODE
      18270631 - QREP122.48:MULTI-NODE RI USING NON-DEFAULT PASSWORDS NOT WORKING
      18266046 - QREP122:48:RI NOT ALLOWING TO IGNORE THE RAC PRE-CHECK FAILURE
      18242201 - UPTAKE TXK INSTALL SCRIPTS AND PLATFORMS.ZIP INTO STARTCD XB48_3
      18236428 - QREP122.47:RI UPGRADE EXISTING OH FOR NON-DEFAULT APPS PASSWORD NOT WORKING
      18220640 - INCONSISTENT DATABASE PORTS DURING EBS 12.2 INSTALLATION FOR STARTCD 12.2.0.47
      18138796 - QREP122:47:RI 10.1.2 TECHSTACK NOT WORKING IF WE RUN RI FROM NEW STARTCD LOC
      18138396 - TST1220: CONTROL FILE NAMING IN RAPID INSTALL SEEMS TO HAVE ISSUES
      18124144 - IMPROVE HANDLING ERRORS FOUND IN CLUVFY LOG DURING PREINSTALL CHECKS
      18111361 - VALIDATE ASM DB DATA FILES PATH AS +<DATA GROUP>/<PATH>
      18102504 - QREP1220.47_5: UNZIP PANEL DOES NOT CREATE THE CORRECT STAGE
      18083342 - 12.2 UPGRADE JAVA.NET.BINDEXCEPTION: CANNOT ASSIGN REQUESTED ADDRESS
      18082140 - QREP122:47:RAC DB VALIDATION IS FAILS WITH EXIT STATUS IS 6
      18062350 - 12.2.3 UPG: 12.2.0 INSTALLATION LOGS
      18050840 - RI: UPGRADE WITH EXISTING RAC OH:SECONDARY DB NODE NAME IS BLANK
      18049813 - RAC LOV DEFAULTS NOT SAVED UNLESS "SELECT" IS CLICKED
      18003592 - TST1220:ADDITIONAL FREE SPACE CHECK FOR RI NEEDS TO BE CHECKED
      17981471 - REMOVE ASM SPACE CHECK FROM RACVALIDATIONS.SH
      17942179 - R12.2 INSTALL FAILING AT ADRUN11G.SH WITH ERRORS RW-50004 & RW-50010
      17893583 - QREP1220.47:VALIDATION OF O.S IN RAPIDWIZ IN THE DB NODE CONFIGURATION SCREEN
      17886258 - CLEANUP FND_NODES DURING UPGRADE FLOW
      17858010 - RI POST INSTALL CHECKS (SSH VERIFICATION) STEP IS FAILING
      17799807 - GEOHR: 12.2.0 - ERRORS IN RAPIDWIZ AND ADCONFIG LOGS
      17786162 - QREP1223.4:RI:SERVICE_NAMES IS PRINTED AS SERVICE_NAME IN RI SCREEN
      17782455 - RI: CONFIRM DEFAULT APPS PASSWORD IN SILENT MODE KICKOFF
      17778130 - RI:ADMIN SERVER TO BE UP ON PRIMARY MID-TIER IN MULTI-NODE UPGRADE FS CREATION
      17773989 - UN-SUPPORTED PLATFORM SHOWS 32 BIT AS HARD-CODED
      17772655 - RELEVANT MESSAGE DURING THE RAPDIWIZ -TECHSTACK
      17759279 - VERIFICATION PANEL DOES NOT EXPAND TECHNOLOGY STACK
      17759183 - BUILDSTAGE SCRIPT MENU NEEDS TO BE ADJUSTED
      17737186 - DATABASE PRE-REQ CHECK INCORRECTLY REPORTS SUCCESS ON AIX
      17708082 - 12.2 INSTALLATION - OS PRE-REQUISITES CHECK
      17701676 - TST122: GENERATE WRONG S_DBSID FOR PATCH FILE SYSTEM AT PHASE PREPARE
      17630972 - /TMP PRE-REQ INSTALLATION CHECK
      17617245 - 12.2 VISION INSTALL FAILS ON AIX
      17603342 - OMCS: DB STAGING COMPLAINS WHILE MOVING IT TO FINAL LOCATION
      17591171 - OMCS: DB STAGING FAILS WITH FRESH INSTALL R12.2
      17588765 - CHECKER VERSION AND PLUGIN VERSION
      17561747 - BUILDSTAGE.SH FAILS WITH ERROR WHEN STAGE HOSTED ON 32BIT LINUX
      17539198 - RAPID INSTALL NEEDS TO IGNORE NON-REQUIRED STAGE ELEMENTS
      17272808 - APPS USERS THAT HAVE DEFAULT PASSWORD AFTER 12.2 RAPID INSTALL

    References:
      12.2 Documentation Library
      1581299.1 : EBS 12.2 Product Information Center
      1320300.1 : Oracle E-Business Suite Release Notes, Release 12.2
      1606170.1 : Oracle E-Business Suite Technology Stack and Applications DBA Release Notes for Release 12.2.3
      1624423.1 : Oracle E-Business Suite Technology Stack and Applications DBA Release Notes for R12.TXK.C.Delta.4 and R12.AD.C.Delta.4
      1594274.1 : Oracle E-Business Suite Release 12.2: Consolidated List of Patches and Technology Bug Fixes

    Related Articles:
      Oracle E-Business Suite 12.2 Now Available
      startCD options to install Oracle E-Business Suite Release 12.2
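    Applying the update is just an unzip over the existing staging area; a hedged sketch (the staging path and patch file name below are illustrative, following the usual My Oracle Support naming):

      cd /u01/StageR122                    # hypothetical staging area
      unzip -o p18086193_R12_GENERIC.zip   # hypothetical file name; -o overwrites the old startCD files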

    Read the article

  • User-friendly program to create editable & searchable pdf files, like tax & application forms and su

    - by Nick Gorbikoff
    Hello. Can somebody recommend a user-friendly program that will create (or convert from Excel, Word, or OpenOffice) editable PDF forms? You know, like the tax forms some of us have filled out, where you can create a form with a predefined format and stationery but let the user edit/fill out the fields. I need something user-friendly that a regular person can use. I'm NOT looking for a PDF library (I already use wkhtmltopdf for generating PDFs programmatically). The reason is that we have about 400 documents (internal expense forms, training forms, etc.) in .doc and .xls format that we want to convert to editable PDFs (so that people don't have to fill them out by hand). Coding 400 templates and then converting them using some library or command-line tool is not my idea of fun, especially since those forms change all the time. I'd like to just give HR and the Quality department the tool, so that they can maintain those documents. I looked at everything listed on this page (http://www.cogniview.com/convert-pdf-to-excel/post/pdf-editing-creation-50-open-sourcefree-alternatives-to-adobe-acrobat/), but can't find what I need. Thank you!!!

    Read the article

  • Azure's Ubuntu 12.04 fails to install PHP5

    - by Alex Kennberg
    Similar to this article from Azure themselves: http://www.windowsazure.com/en-us/manage/linux/common-tasks/install-lamp-stack/ I am trying to install PHP5 on an Ubuntu 12.04 virtual machine. However, it fails installing ssl-cert:

      $ sudo apt-get install php5
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      php5 is already the newest version.
      0 upgraded, 0 newly installed, 0 to remove and 49 not upgraded.
      1 not fully installed or removed.
      After this operation, 0 B of additional disk space will be used.
      Do you want to continue [Y/n]? y
      Setting up ssl-cert (1.0.28) ...
      Could not create certificate. Openssl output was:
      Generating a 2048 bit RSA private key
      ............................+++
      ...................................................................................................................+++
      writing new private key to '/etc/ssl/private/ssl-cert-snakeoil.key'
      -----
      problems making Certificate Request
      140320238503584:error:0D07A097:asn1 encoding routines:ASN1_mbstring_ncopy:string too long:a_mbstr.c:154:maxsize=64
      dpkg: error processing ssl-cert (--configure):
      subprocess installed post-installation script returned error exit status 1
      Errors were encountered while processing:
      ssl-cert
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any tips appreciated.
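    The "string too long ... maxsize=64" line suggests the VM's fully qualified hostname exceeds OpenSSL's 64-character limit for the certificate's CN, which Azure's long generated FQDNs can easily hit. A hedged workaround sketch, assuming that diagnosis is right:

      sudo hostname shortname   # temporarily use a short host name
      sudo make-ssl-cert generate-default-snakeoil --force-overwrite
      sudo apt-get -f install   # let dpkg finish configuring ssl-cert and php5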

    Read the article

  • Is there an IE8 setting or policy to make it work like IE7 with respect to persistent connections?

    - by Stephen Pace
    I am working with a commercial application running on XP using IIS 5.1. Periodically the application returns the IIS error "There are too many people accessing the Web site at this time." This is caused by Microsoft artificially limiting the number of connections (10) under IIS 5.1 on Windows XP, but in this case there is really only one user (albeit with a few tabs open at a time). Microsoft suggests you can reduce the problem by turning off HTTP keep-alives for that particular web site (http://support.microsoft.com/kb/262635): "If you use IIS 5.0 on Windows 2000 Professional or IIS 5.1 on Microsoft Windows XP Professional, disable HTTP keep-alives in the properties of the Web site. When you do this, a limit of 10 concurrent connections still exists, but IIS does not maintain connections for inactive users." I may do that, although I'm worried about performance degradation. I also notice that IE8 appears to handle this differently than IE7: by default, IE6 and IE7 use 2 persistent connections while IE8 uses 6. Perhaps IE8 itself is generating multiple connections in an attempt to be faster, and those additional connections are overwhelming the artificially limited IIS 5.1 on XP? Assuming that is the case, is there an Internet Explorer option, registry setting, or policy I can set to force IE8 to behave like IE7 with respect to persistent connections? I would not set this for all users, but for the small number of users who use this application it might solve their intermittent problem until the application can be rehosted on Windows Server 2008. Thanks.
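    One candidate, offered as a hedged sketch because IE8 moved this behavior under its feature-control keys: cap the per-server connections back at the IE7 values via FEATURE_MAXCONNECTIONSPERSERVER (and its HTTP/1.0 sibling), then test before rolling it out:

      reg add "HKLM\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_MAXCONNECTIONSPERSERVER" /v iexplore.exe /t REG_DWORD /d 2 /f
      reg add "HKLM\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_MAXCONNECTIONSPER1_0SERVER" /v iexplore.exe /t REG_DWORD /d 4 /f

    Both values could also be pushed to just the affected users through Group Policy preferences rather than machine-wide.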

    Read the article

  • DoS/Flood Lag Even Though Port Is Not Saturated

    - by Asad Moeen
    My game servers had been under UDP floods that made them generate output back to the attacker, giving the game servers huge lag. Thanks to friends at Server Fault and various kinds of testing, I was able to block the attack successfully. My question is actually about something else, but it is important to know how the game servers reacted to the attack and whether the machine stayed stable: 300 kb/s of input would cause a game server to generate 2 mb/s of output, so as the input rate kept increasing, the output rate would climb so high that the game server could no longer control it, and it would lag badly until the attack stopped. Usually the game server starts to lag when it sends out more than 5 mb/s; below that it is controllable. Theoretically, I was able to get 60 mb/s of output from my game server by inputting 10 mb/s; that's just the way the game server works if not protected. Now, on some of my machines only the game server under attack lagged: although that server was generating 60 mb/s of output, the rest of the game servers on other ports of the same machine ran fine without lag. But there was another machine, also on a 100 Mbps network port, where even 1 Mbps of input (and ZERO output, because the attack is blocked), even on an unused port, would give a constant yellow line (on the lag-o-meter) to the clients on all game servers, indicating lag; under normal conditions that line is blue. It remained the same even at 50 Mbps or 900 Mbps of input. I tried contacting the host about it, because I believe it is the way their network is bridged, but they can't help me with it. Does anyone know about such issues? If 900 Mbps of input does not saturate the port, how can 1 Mbps of input lag the servers, although the port is not saturated and enough bandwidth is available?

    Read the article

  • Cannot find FIS partition 'initramfs' - need help!

    - by vikramtheone
    Hi guys, I have Ubuntu 9.04 Linux running on Freescale's i.MX515 (ARM Cortex based) board. There were about 250 updates pending and I applied them today; some of the updates failed because of the infamous errors:

      E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.
      E: Couldn't rebuild package cache
      E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.

    So when I run 'sudo dpkg --configure -a' I get new errors related to the FIS partition:

      Cannot find FIS partition 'initramfs'
      User postinst hook script [/usr/sbin/flash-kernel] exited with value 1
      dpkg: error processing linux-image-2.6.28-18-imx51 (--configure):
      subprocess post-installation script returned error exit status 1
      dpkg: dependency problems prevent configuration of linux-image-imx51:
      linux-image-imx51 depends on linux-image-2.6.28-18-imx51; however:
      Package linux-image-2.6.28-18-imx51 is not configured yet.
      dpkg: error processing linux-image-imx51 (--configure):
      dependency problems - leaving unconfigured
      dpkg: dependency problems prevent configuration of linux-imx51:
      linux-imx51 depends on linux-image-imx51 (= 2.6.28.18.23); however:
      Package linux-image-imx51 is not configured yet.
      dpkg: error processing linux-imx51 (--configure):
      dependency problems - leaving unconfigured
      Processing triggers for initramfs-tools ...
      update-initramfs: Generating /boot/initrd.img-2.6.28-18-imx51
      Cannot find FIS partition 'initramfs'
      dpkg: subprocess post-installation script returned error exit status 1

    What's going wrong here? I need help! I'm a newbie. Regards, Vikram
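    If the board does not actually boot from a RedBoot FIS partition, one hedged workaround is to skip flash-kernel's hook so dpkg can finish; recent flash-kernel versions honor an environment variable for this (verify on yours):

      sudo FLASH_KERNEL_SKIP=true dpkg --configure -a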

    Read the article

  • check_snmp warning & critical thresholds with negative values

    - by Oesor
    I'm querying signal-level values measured in dBm, and the SNMP host on the remote device reports them as negative values, e.g., -90 dBm. However, check_snmp seems incapable of dealing with negative numbers in its threshold values. If I specify the values as part of a collection of OIDs, it accepts the syntax but converts the SNMP value to positive, thus always generating a WARNING/CRITICAL result:

      root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::AverageReceiveSNR.0,DEVICE-MIB::CurrentNoiseFloor.0 -w 10:,~:-85 -c 15:,~:-80 -vvvv
      /usr/bin/snmpget -t 1 -r 5 -m ALL -v 1 [authpriv] 192.168.1.100:161 DEVICE-MIB::AverageReceiveSNR.0 DEVICE-MIB::CurrentNoiseFloor.0
      DEVICE-MIB::AverageReceiveSNR.0 = INTEGER: 25
      DEVICE-MIB::CurrentNoiseFloor.0 = INTEGER: -97
      Processing line 1
      oidname: DEVICE-MIB::AverageReceiveSNR.0
      response: = INTEGER: 25
      Processing line 2
      oidname: DEVICE-MIB::CurrentNoiseFloor.0
      response: = INTEGER: -97
      SNMP CRITICAL - 25 *97* | DEVICE-MIB::AverageReceiveSNR.0=25 DEVICE-MIB::CurrentNoiseFloor.0=97

    If I run it with a single OID, it gives me an error that the format is incorrect:

      root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::CurrentNoiseFloor.0 -w ~:-85 -c ~:-80 -vvvv
      Range format incorrect

    And if I run it with no thresholds defined, it works properly and returns the right value. This makes the graphs correct, but it will never generate a notification when out of range:

      root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::CurrentNoiseFloor.0 -vvvv
      /usr/bin/snmpget -t 1 -r 5 -m ALL -v 1 [authpriv] 192.168.1.100:161 DEVICE-MIB::CurrentNoiseFloor.0
      DEVICE-MIB::CurrentNoiseFloor.0 = INTEGER: -97
      Processing line 1
      oidname: DEVICE-MIB::CurrentNoiseFloor.0
      response: = INTEGER: -97
      SNMP OK - -97 | DEVICE-MIB::CurrentNoiseFloor.0=-97

    What am I doing wrong here? How would I, for example, generate a CRITICAL when the noise floor is -80 dBm or higher, a WARNING when it's -85 to -80 dBm, and an OK when it's -85 dBm or lower? Do I have to write my own SNMP plugins when dealing with negative values?
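    If the installed check_snmp genuinely cannot parse negative ranges, a small wrapper plugin that inverts the sign is one way out. A hedged sketch (assumes SNMPv1 with a "public" community string; adjust host, community, and OID):

      #!/bin/sh
      # check_noisefloor.sh: CRITICAL at >= -80 dBm, WARNING from -85 to -80 dBm, OK below.
      VAL=$(snmpget -v1 -c public -Oqv 192.168.1.100 DEVICE-MIB::CurrentNoiseFloor.0)
      INV=$((-1 * VAL))   # -97 dBm becomes 97, so the comparisons stay positive
      if [ "$INV" -le 80 ]; then echo "NOISE CRITICAL - ${VAL} dBm"; exit 2
      elif [ "$INV" -le 85 ]; then echo "NOISE WARNING - ${VAL} dBm"; exit 1
      else echo "NOISE OK - ${VAL} dBm | noisefloor=${VAL}"; exit 0
      fi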

    Read the article

  • How to make an SSH connection between servers using public-key authentication

    - by Rafael
    I am setting up a continuous integration (CI) server and a test web server. I would like the CI server to access the web server with public-key authentication. On the web server I have created a user and generated the keys:

      sudo useradd -d /var/www/user -m user
      sudo passwd user
      sudo su user
      ssh-keygen -t rsa
      Generating public/private rsa key pair.
      Enter file in which to save the key (/var/www/user/.ssh/id_rsa):
      Created directory '/var/www/user/.ssh'.
      Enter passphrase (empty for no passphrase):
      Enter same passphrase again:
      Your identification has been saved in /var/www/user/.ssh/id_rsa.
      Your public key has been saved in /var/www/user/.ssh/id_rsa.pub.

    On the other side, the CI server copies its key to the host, but it still asks for a password:

      ssh-copy-id -i ~/.ssh/id_rsa.pub user@webserver-address
      user@webserver-address's password:
      Now try logging into the machine, with "ssh 'user@webserver-address'", and check in:
      .ssh/authorized_keys
      to make sure we haven't added extra keys that you weren't expecting.

    I checked on the web server and the CI server's public key has been copied to the web server's authorized_keys, but when I connect, it asks for a password:

      ssh 'user@webserver-address'
      user@webserver-address's password:

    If I use the root user rather than my created user (both users have the copied public keys), it connects with the public key:

      ssh 'root@webserver-address'
      Welcome to Ubuntu 11.04 (GNU/Linux 2.6.18-274.7.1.el5.028stab095.1 x86_64)
      * Documentation: https://help.ubuntu.com/
      Last login: Wed Apr 11 10:21:13 2012 from *******
      root@webserver-address:~#
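    A common culprit with this exact symptom, offered as a hedged guess: sshd's StrictModes check silently ignores authorized_keys when the home directory or ~/.ssh is group- or world-writable, and a home directory under /var/www often is. A sketch of the permission fix, plus where to confirm:

      chown -R user:user /var/www/user/.ssh
      chmod 755 /var/www/user                  # home dir must not be group/world-writable
      chmod 700 /var/www/user/.ssh
      chmod 600 /var/www/user/.ssh/authorized_keys
      tail -f /var/log/auth.log                # watch while connecting; StrictModes refusals show up here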

    Read the article

  • What information must never appear in logs?

    - by MainMa
    I'm about to write the company guidelines about what must never appear in logs (the trace of an application). Some developers try to include as much information as possible in the trace, making it risky to store those logs and extremely dangerous to submit them, especially when the customer doesn't know this information is stored, because she never cared about it and never read the documentation and/or warning messages. For example, when dealing with files, some developers are tempted to trace the names of the files. If we trace everything on error before appending a file name to a directory, it will be easy to notice, say, that the appended name is too long and that the bug in the code was forgetting to check the length of the concatenated string. That is helpful, but it is sensitive data and must never appear in logs. In the same way, the following must never appear in a trace: passwords; IP addresses and network information (MAC address, host name, etc.)¹; database accesses; direct input from the user and stored business data. So what other types of information must be banished from the logs? Are there any guidelines already written which I can use? ¹ Obviously, I'm not talking about things like IIS or Apache logs. What I'm talking about is the sort of information which is collected with the sole intent of debugging the application itself, not tracing the activity of untrusted entities. Edit: Thank you for your answers and your comments. Since my question is not very precise, I'll try to answer the questions asked in the comments. What am I doing with the logs? The logs of the application may be stored in memory, in plain text on the hard disk of localhost, in a database (again in plain text), or in Windows Events. In every case, the concern is that those sources may not be safe enough. For example, when a customer runs an application and it stores logs in a plain-text file in the temp directory, anybody with physical access to the PC can read those logs. The logs may also be sent over the internet: if a customer has an issue with an application, we can ask her to run it in full-trace mode and send us the log file. Also, some applications may send crash reports to us automatically (and even if there are warnings about sensitive data, in most cases customers don't read them). Am I talking about specific fields? No. I'm working on general business applications only, so the only sensitive data is business data. There is nothing related to health or other fields covered by specific regulations. But thank you for mentioning that; I probably should take a look at those fields for clues about what I can include in the guidelines. Isn't it easier to encrypt the data? No. It would make every application much more difficult to write, especially if we want to use C# diagnostics and TraceSource. It would also require managing authorizations, which is not the easiest thing to do. Finally, if we are talking about logs submitted to us by a customer, we must be able to read the logs without having access to sensitive data. So technically, it's easier to never include sensitive information in logs at all, and to never have to care about how and where those logs are stored.

    Read the article

  • ISC DHCPD IPv6 for multiple interfaces

    - by Seoman
    I want to assign multiple IPv6 addresses to a server with multiple NICs. As the IPv6 RFC defines, each server has a unique DUID, which can have one of three formats (LL, LLT, or enterprise), and each NIC has an IAID. So a request from NIC1 carries the DUID and the IAID of NIC1, and a request from NIC2 carries the same DUID but a different IAID. The problem is that from a CentOS box, when I ask for an IP on two different interfaces, I get the same IP. I can't find how to specify a host entry based on the DUID and the IAID. I see some people generating a unique DUID based on the MAC of the NIC, but this is not what the IPv6 RFC says. What I tried is:

      host entry1 {
        host-identifier option dhcp6.client-id 00:01:00:01:19:fc:f8:1c:52:54:00:7e:c9:ec;
        option dhcp6.ia-na "00:09:40:5d";
        fixed-address6 2001:db8:0:1::202;
      }
      host entry2 {
        host-identifier option dhcp6.client-id 00:01:00:01:19:fc:f8:1c:52:54:00:7e:c9:ec;
        option dhcp6.ia-na "00:7e:c9:ec";
        fixed-address6 2001:db8:0:1::201;
      }

    This causes a segmentation fault in the client (which is scary...). I guess ia-na is not the right option to use here, but I don't see any other option.

    Read the article

  • Ubuntu 64-bit Xen DomU issues after upgrade from Karmic to Lucid

    - by Shoaibi
    I was upgrading my servers today and it all went fine except the last machine, which has the following issues [resolved using http://www.ndchost.com/wiki/server-administration/upgrade-ubuntu-pre-10.04#post-1004-upgradefinal-steps]:

    No login prompt on console:

      Done.
      Begin: Mounting root file system... ...
      Begin: Running /scripts/local-top ... Done.
      [ 0.545705] blkfront: xvda: barriers enabled
      [ 0.546949] xvda: xvda1
      [ 0.549961] blkfront: xvde: barriers enabled
      [ 0.550619] xvde: xvde1 xvde2
      Begin: Running /scripts/local-premount ... Done.
      [ 0.870385] kjournald starting. Commit interval 5 seconds
      [ 0.870449] EXT3-fs: mounted filesystem with ordered data mode.
      Begin: Running /scripts/local-bottom ... Done.
      Done.
      Begin: Running /scripts/init-bottom ... Done.

    I also tried pressing ENTER and CTRL+C many times; no use.

    Resolved [/tmp was mounted as noexec; changing that fixed it]: I get errors when I try to reinstall udev in single-user mode:

      Unpacking replacement udev ...
      Processing triggers for ureadahead ...
      ureadahead will be reprofiled on next reboot
      Processing triggers for man-db ...
      Setting up udev (151-12.1) ...
      udev start/running, process 1003
      Removing `local diversion of /sbin/udevadm to /sbin/udevadm.upgrade'
      update-initramfs: deferring update (trigger activated)
      Processing triggers for initramfs-tools ...
      update-initramfs: Generating /boot/initrd.img-2.6.32-25-server
      /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/fixrtc: Permission denied
      /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/ntfs_3g: Permission denied
      /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/resume: Permission denied
      /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/nfs-top/udev: Permission denied
      /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/panic/console_setup: Permission denied
      /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/all_generic_ide: Permission denied
      /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/blacklist: Permission denied
      /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/udev: Permission denied
      /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-bottom/udev: Permission denied
      /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-bottom/ntfs_3g: Permission denied
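    For reference, the /tmp noexec fix mentioned above can be applied without a reboot; a hedged sketch assuming /tmp is its own mount:

      mount -o remount,exec /tmp
      apt-get install --reinstall udev
      mount -o remount,noexec /tmp   # restore the original hardening afterwards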

    Read the article

  • 13 Solutions for Greater Security in an Oracle Database (Best Practices)

    - by C.Muetzlitz
    External forces such as legislation require IT to protect (our) data. But how do you actually verify the security configuration of an Oracle database? Is the required security adequately implemented, ideally in line with the actual protection needs? When did you last check the security of your Oracle database? And, even better: do you know the threats and the risks derived from them? These are all questions a responsible application owner should be able to answer immediately. Or do you see it differently? How can you best protect yourself against threats? The only correct answer to this question: through information and the knowledge derived from it. That information, and the knowledge hidden within it, spans a great many sources, so it becomes ever harder to acquire the right knowledge and apply it to protecting data and databases. Looking at the Oracle database, I recommend two essential areas you must cover: know the best-practice solutions you should (and in part must) implement to guarantee good security, which I call "13 solutions for greater security in an Oracle database (best practices)", and know the real security state of your Oracle database, which I call "Check Oracle DB Security". In this post I would like to introduce you to the fundamentals of good Oracle database security and enable you to determine the security state of your database yourself.

    13 solutions for greater security in an Oracle database (best practices):

      1. Activate password management: be aware that weak passwords are a serious threat, and enable sensible password management.
      2. Know the feature set of your current database version, including the features that are no longer supported. The "New Features and Upgrade Guide" should be mandatory reading.
      3. Implement an appropriate security baseline. Oracle provides many guidelines here.
      4. Keep role and account management under control: controlled privilege assignment (least privilege), separation of duties in account management, and an ongoing review of role management and the access concept.
      5. Implement a secure database link concept: particularly in data integration, DB links are configured in the database again and again, and these links can open up uncontrolled access to remote databases. Track their use and implement a secure DB link concept; Oracle provides the corresponding guidelines.
      6. Define protection policies for your applications, e.g., a proper application-owner and application-user setup.
      7. Implement the necessary protection for important data: know which data must be protected, and protect it appropriately.
      8. Control resource consumption in your database.
      9. Implement sensible separation of duties in the database as well.
      10. Enable sensible, legally compliant auditing. Legislation demands it, and Oracle specifies a minimum level of auditing.
      11. Implement processes that preserve the good state of the database.
      12. Perform regular health checks. Oracle ships a complete library for this with Enterprise Manager, for example.
      13. Define a working patch-management process: know the Critical Patch Updates and act when necessary.

    Check Oracle DB Security, or: those who do not know the security state will take no measures. Checking the security state of an Oracle database is very important. Various applications available on the market can do this; a good choice would be, for example, Oracle Enterprise Manager (Cloud Control) with Lifecycle Management, which periodically determines the security state for you. A manual check is also possible but requires deep knowledge; even so, understanding how to check an Oracle database for security manually is important despite the high knowledge requirement. Stop relying on assumptions: take the security of your database seriously and get to know the real state of your database. Knowledge of real states and knowledge of suitable concepts protect you. Only then can you decide which measures are actually necessary. Further information: the Oracle online documentation for the database; various articles in the knowledge base of Oracle Support; the new book "Oracle Security in der Praxis. Vollständige Sicherheitsüberprüfung Ihrer Oracle Datenbank".
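    As a small taste of such a manual check, here is a hedged SQL*Plus sketch covering point 1 (assumes DBA access; the dba_users_with_defpwd view exists from 11g onward):

      sqlplus -s / as sysdba <<'EOF'
      -- Which profiles actually enforce password management?
      SELECT profile, resource_name, limit
      FROM   dba_profiles
      WHERE  resource_type = 'PASSWORD';
      -- Accounts still set to a default password:
      SELECT username FROM dba_users_with_defpwd;
      EOF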

    Read the article

  • How can I identify which process is making UDP traffic on Linux?

    - by boos
    My machine is continuously making UDP DNS requests. What I need to know is the PID of the process generating this traffic. The normal way with a TCP connection is to use netstat/lsof and get the process associated with the PID, but UDP is stateless, so when I call netstat/lsof I can see the socket only while it is open and sending traffic. I have tried lsof -i UDP and netstat -anpue, but I can't find which process is making the requests, because I would have to call lsof/netstat at the exact moment the UDP traffic is sent; calling them before or after the datagram is sent makes it impossible to see the open UDP socket. Calling netstat/lsof at the exact moment three or four UDP packets are sent is IMPOSSIBLE. How can I identify the infamous process? I have already inspected the traffic to try to identify the sending PID from the content of the packets, but that is not possible either. Can anyone help me? I'm root on this machine.

      FEDORA 12
      Linux noise.company.lan 2.6.32.16-141.fc12.x86_64 #1 SMP Wed Jul 7 04:49:59 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux
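    One hedged approach that removes the timing problem entirely: let the kernel's audit subsystem record every process that sends a datagram, then read the PID out of the trail afterwards (assumes auditd is installed and running):

      auditctl -a exit,always -F arch=b64 -S sendto -S sendmsg -k udp-hunt
      # ...wait for a few DNS bursts, then:
      ausearch -k udp-hunt -i | less   # look for the comm=/pid= fields on the sendto records
      auditctl -d exit,always -F arch=b64 -S sendto -S sendmsg -k udp-hunt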

    Read the article

  • Thumbnail generation with imagemagick doesn't render the correct colors

    - by Bastien
    Generating thumbnails of PDFs with ImageMagick sometimes renders incorrect colors. We're using an old version of ImageMagick (6.5.7-8, the version installed on the Heroku servers). Here is the command we're currently using:

      convert -size "725x1200>" -colorspace RGB -flatten -density 300 -quality 100 input.pdf output.jpg

    I've tried using different colorspaces like sRGB and YIQ, but none of them render the colors correctly. Using ImageMagick 6.7.7-6 locally works, so I tried to bundle the convert command within my application's /bin directory; the command works, but the result is still wrong, so it seems the problem comes either from another ImageMagick command used by convert or from another library. Here is an example of the outputs:

      Correct output: http://i.stack.imgur.com/gf9eG.jpg
      Wrong output: http://i.stack.imgur.com/imUeD.jpg

    Strangely, with some pages of the same PDF the output is always correct. Any idea which library or command could be the issue, or whether there is a proper set of options to pass to ImageMagick to always get it right? Thanks in advance for your help.
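    Since convert delegates PDF rasterization to Ghostscript, one hedged experiment is to take the old ImageMagick out of that step: rasterize with Ghostscript yourself and let convert only resize (assumes Ghostscript is installed; page 1 shown):

      gs -dSAFER -dBATCH -dNOPAUSE -sDEVICE=jpeg -r300 -dFirstPage=1 -dLastPage=1 \
         -sOutputFile=page1.jpg input.pdf
      convert page1.jpg -resize "725x1200>" -quality 100 output.jpg

    If the Ghostscript output is already wrong, the color problem lives in the server's Ghostscript rather than in ImageMagick itself.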

    Read the article

  • SQL SERVER – Importance of User Without Login

    - by pinaldave
    Some questions are very open-ended, and it is very hard to come up with exact requirements. Here is one question I was asked at a recent User Group meeting. Question: "In recent versions of SQL Server we can create a user without a login. What is the use of it?" A great question indeed. Let me first attempt to answer it, but after reading my answer I need your help: I want you to help him as well by adding more value to it. Answer: Let us visualize a scenario. An application has lots of different operations, and many of them are very sensitive. The common practice was to give the application a specific role with broader permissions and access; a regular (non-sysadmin) user's own login might have very restrictive permissions. The application itself had a username and password, which means the application could log directly into the database and perform its operations. Developers were well aware of that username and password, as they were embedded in the application. When a developer left the organization, or when the password was changed, every part of the application where the same username and password were used had to be changed. Additionally, developers could use that username and password to log in directly, bypassing the application. In earlier versions of SQL Server there were application roles; these were later replaced by the "user without login". Now let us recreate the above scenario using this new user without login. In this case, users have to log in to SQL Server with their own credentials, each with his or her own username and password. Once logged in, the user can use the application. The database should have a separate user without login which has all the necessary permissions and rights to execute the various operations; the application then executes its scripts by impersonating that more privileged user without login. The assumption here is that the user's own login does not have enough permissions, while the other user (without login) has more rights. If a user knows how the application uses the database and its various operations, he can switch contexts to the user without login, enabling him to make further modifications. So make sure to explicitly DENY the VIEW DEFINITION permission on the database; this makes things more difficult for the user, as he will have to know the exact details to obtain additional permissions. If a user is a system admin, everything I just described in the above three paragraphs does not apply, as an admin always has access to everything. Additionally, the method described above is just one architecture; if someone is attempting to damage the system, they may still be able to figure out a workaround. You will have to add auditing and policy-based management on top to prevent such incidents and accidents. I guess this is my answer. I have read it multiple times, but I still feel I am missing something. There should be more to this concept than what I have just described; I have merely described one scenario, but there will be many more where this feature is useful. Now it is your turn to help: please leave a comment with additional suggestions for where exactly a "user without login" will be useful, and let me know whether I missed anything in the scenario described above.
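    A minimal T-SQL sketch of the pattern described above (the names and schema are illustrative, not from the original question):

      -- A user that exists only inside the database; nobody can log in as it:
      CREATE USER AppExecutor WITHOUT LOGIN;
      GRANT EXECUTE ON SCHEMA::Sales TO AppExecutor;

      -- A low-privileged caller is elevated only while inside this module:
      CREATE PROCEDURE Sales.ArchiveOldOrders
      WITH EXECUTE AS 'AppExecutor'
      AS
      BEGIN
          DELETE FROM Sales.Orders
          WHERE OrderDate < DATEADD(year, -7, GETDATE());
      END;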
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Clarification on signals (sighup), jobs, and the controlling terminal

    - by asolberg
    So I've read two different perspectives and I'm trying to figure out which one is right. 1) Some sources online say that signals sent from the controlling terminal are ONLY sent to the foreground process group. That means if you want a process to continue running in the background when you log out, it is sufficient to suspend the job (Ctrl-Z) and resume it in the background (bg); then you can log out and it will continue to run, because SIGHUP is only sent to the foreground job. See http://blog.nelhage.com/2010/01/a-brief-introduction-to-termios-signaling-and-job-control/ : "...In addition, if any signal-generating character is read by a terminal, it generates the appropriate signal to the foreground process group...." 2) Other sources claim you need to use the nohup command at the time the program is executed or, failing that, issue a disown command during execution to remove it from the jobs table that listens for SIGHUP. They say that if you don't do this, your process will exit when you log out, even if it is running in a background process group. For example, http://docstore.mik.ua/orelly/unix3/upt/ch23_11.htm : "...If I log out anyway, the shell sends my background job a HUP signal...". In my own experiments with Ubuntu Linux, it seems 1) is correct: I executed "sleep 20 &", logged out, logged back in, and ran "ps aux"; sure enough, the sleep command was still running. So why do so many people seem to believe 2)? And if all you have to do is place a job in the background to keep it running, why do so many people use nohup and disown?
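    In bash, at least, both camps can be right, because whether the shell HUPs its background jobs at exit is itself a shell option; a quick way to check (huponexit only applies to login shells):

      shopt huponexit      # usually "off": background jobs survive logout
      shopt -s huponexit   # switch it on, and bash will SIGHUP all jobs when the login shell exits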

    Read the article

  • Apache22 on FreeBSD - Starts, does not respond to requests

    - by NuclearDog
    Hey folks! I'm running Apache 2.2.17 with the peruser MPM on FreeBSD 8.2-RC1 on Amazon's EC2 (so it's Xen). It was installed from ports. My problem is that, although Apache is running, listening for, and accepting connections, it doesn't actually respond to any requests or show them in the log at all. If I telnet to the port it's listening on and type out an HTTP request:

      GET / HTTP/1.1
      Host: asdfasdf

    and hit enter a couple of times, it just sits there... nothing. No response when requesting with a browser either. There doesn't appear to be anything helpful in the error log:

      [Sun Jan 09 16:56:24 2011] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
      [Sun Jan 09 16:56:25 2011] [notice] Digest: generating secret for digest authentication ...
      [Sun Jan 09 16:56:25 2011] [notice] Digest: done
      [Sun Jan 09 16:56:25 2011] [notice] Apache/2.2.17 (FreeBSD) mod_ssl/2.2.17

    The access log stays empty:

      root:/var/log# wc httpd-access.log
      0 0 0 httpd-access.log
      root:/var/log#

    I've tried with accf_http and accf_data both enabled and disabled, and with both the stock configuration and my customized config. I also tried uninstalling apache22-peruser-mpm and installing plain apache22... still no luck. I tried removing all of the LoadModule lines from httpd.conf and re-enabling only the ones necessary to parse the config, ending up with only the following loaded:

      root:/usr/local/etc/apache22# /usr/local/sbin/apachectl -M
      Loaded Modules:
      core_module (static)
      mpm_peruser_module (static)
      http_module (static)
      so_module (static)
      authz_host_module (shared)
      log_config_module (shared)
      alias_module (shared)
      Syntax OK
      root:/usr/local/etc/apache22#

    Same results. Apache is definitely what's listening on port 80:

      root:/usr/local/etc/apache22# sockstat -4 | grep httpd
      root httpd 43789 3 tcp4 6 *:80 *:*
      root httpd 43789 4 tcp4 *:* *:*
      root:/usr/local/etc/apache22#

    And I know it's not a firewall issue, as there is nothing running locally, and connecting from the local box to 127.0.0.1:80 results in the same issue. Does anyone have any idea what's going on, and why it would be doing this? I've exhausted all of my debugging expertise. :/ Thanks for any suggestions!
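    One hedged thing to check, given these symptoms: the peruser MPM hands every accepted connection to a configured processor, so a configuration without a working Multiplexer/Processor pair can accept connections yet never answer them. A sketch of a minimal peruser section (directive names as in the peruser documentation, user/group values illustrative; verify against your build):

      <IfModule mpm_peruser_module>
          MinSpareProcessors 2
          MaxProcessors 10
          Multiplexer nobody nobody
          Processor www www
      </IfModule>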

    Read the article

  • Does MySQL have some kind of DoS protection or per-user query limit?

    - by Ghostrider
    I'm a bit at a loss. I'm running a MySQL database that's roughly 1GB of data and indices combined, on a dedicated Linux server. The DB version is '5.0.89-community'. Configuration is controlled via cPanel; PHP actually runs elsewhere on shared hosting. IP addresses are static and don't change, and access from the remote IP address is properly configured. The website gets around 10K hits per day, with each hit generating a database query. Some of these queries are expensive (~1 sec execution time). All is fine and well until, at some point, the DB server starts refusing connections from the client, claiming that the specific user can't access the server from that IP. Resetting the server will always fix the problem for a day or two, and then the same thing happens. There are some other DBs on that server, some of which are hit pretty hard, occasionally or constantly. One of the apps maintains several persistent connections, since it does a couple of updates per minute, though I don't think that's related. What's driving me mad is that I can't figure out why the server would start refusing connections; there is nothing in the logs. This is a hosted dedicated server, so the hosting company created the OS image and I didn't write or go over every line of configuration. I'd do it, but I'm at a loss as to where to start looking. Any advice is appreciated.
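    A hedged guess that matches "refused from one IP until a restart": MySQL blocks a host outright after max_connect_errors aborted connections from it, and the block persists until the counters are flushed. A quick check-and-fix sketch:

      mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connect_errors';"
      mysqladmin -u root -p flush-hosts   # clears the block without restarting mysqld
      # To raise the limit persistently, set in my.cnf under [mysqld]:
      #   max_connect_errors = 10000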

    Read the article

  • Utility to Script SQL Server Configuration

    - by Bill Graziano
    I wrote a small utility to script some key SQL Server configuration information. I had two goals for this utility: assist with disaster recovery preparation, and identify configuration changes. I've released the application as open source through CodePlex; you can download it from the Script SQL Server Configuration project page. The application is a .NET 2.0 console application that uses SMO. It writes its output to a directory that you specify. Disaster planning: ScriptSqlConfig generates scripts for logins, jobs, and linked servers, and writes the properties and configuration from the instance to text files. The scripts are designed so they can be run against a DR server in the case of a disaster; the properties and configuration will need to be compared manually. Each job is scripted to its own file. Each linked server is scripted to its own file; the linked servers don't include the password if you use a SQL Server account to connect to the linked server, so you'll need to store those somewhere secure. All the logins are scripted to a single file. This file includes Windows logins, SQL Server logins, and any server role membership. The SQL Server logins are scripted with the correct SID and hashed passwords, which means that when you create the login it will automatically match up to the users in the database and have the correct password. This is the only script that I generate programmatically rather than using SMO. The SQL Server configuration and properties are scripted to text files. These will need to be reviewed manually in the event of a disaster, or you could DIFF them with the configuration on the new server. Configuration changes: these scripts and files are all designed to be checked into a version control system. The scripts themselves don't include any date-specific information. In my environments I run this every night and check in the changes. I call the application once for each server and script each server to its own directory. The process deletes any existing files before writing new ones; this solved the problem I had where the scripts for deleted jobs and linked servers would continue to show up. To see any changes, I just need to query the version control system to show me any changes to the files. Database scripting: utilities that script database objects are plentiful. CodePlex has at least a dozen of them, including one I wrote years ago; the code is so easy to write it's hard not to include that functionality. This functionality wasn't high on my list because it's included in a database backup. Unless you specify the /nodb option, the utility will script out many user database objects, one object per file. It will script tables, stored procedures, user-defined data types, views, triggers, table types, and user-defined functions. I know there are more I need to add but haven't gotten around to it yet; if there's something you need, please log an issue and get it added. Since it scripts one object per file, these really aren't appropriate for recreating an empty database. They are really good for checking into source control every night and then seeing what changed. I know everyone tells me all their database objects are in source control, but a little extra insurance never hurts. Conclusion: I hope this utility will help a few of you out there. My goal is to have it script all server objects that aren't contained in user databases. This should help with configuration changes and especially disaster recovery.
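    For anyone curious what "the correct SID and hashed passwords" looks like in the generated script, here is a hedged illustration (the hash and SID bytes below are made up; real ones come from sys.sql_logins):

      -- Sketch only: hypothetical values, shown to illustrate the generated script's shape
      CREATE LOGIN [AppLogin]
          WITH PASSWORD = 0x01004086CEB6301EEC0A994E49E30DA235880057522F5AF25872 HASHED,
          SID = 0x2F5B769F543E534DB31E5A685D0CEEAA,
          CHECK_POLICY = OFF;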

    Read the article

  • Why does Exchange 2003 silently reject emails with large attachments?

    - by Cypher
    Our environment: Exchange Server 2003 Standard, single instance, running on Windows Server 2003 Standard, configured to not send/receive mail with attachments larger than 10 MB. NDRs are not enabled. The issue: when an external sender sends an email with an attachment larger than 10 MB, Exchange, as configured, does not receive the message. However, the sender of that message does not receive any notification from his own mail server that the message could not be delivered due to attachment size. Yet if an external user tries to send an email to a non-existent user, they do receive a message from their mail server indicating that the user does not exist. Why is that, and is there anything I can do about it? It would be nice if the sender received notification that the attachment size exceeds our limits and their message was never received... Update: the Exchange server has a SpamAssassin box in front of it... could that have something to do with it? Here is one of the last lines from SpamAssassin's logs when searching for my test emails:

      mail postfix/smtp[19133]: 2B80917758: to=, relay=10.0.0.8[10.0.0.8]:25, delay=4.3, delays=2.6/0/0/1.7, dsn=2.6.0, status=sent (250 2.6.0 Queued mail for delivery)

    My assumption is that SpamAssassin thinks the message is OK and forwards it on to Exchange. Update: I've verified that Exchange is receiving the message and generating an NDR. However, delivery of NDRs is disabled to prevent backscatter. Is there something I can do to get Exchange to send a bounce message to the sending mail server (or to verify that such a message is being sent), so the sending mail server can notify its sender of the bounce?

    Read the article

  • Cannot Access Local Network Shares (Strange Schannel and lsass.exe issues)

    - by Fake
    When I browse to my own computer's shares by going to \\MYCOMPUTERNAME\, I cannot access any of the shares on my LOCAL machine (nor can I access them remotely), and it generates about 40 of the following errors in my system event log:

      The following fatal alert was generated: 10. The internal error state is 1203.

    Details:

      <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
        <System>
          <Provider Name="Schannel" Guid="{1F678132-5938-4686-9FDC-C8FF68F15C85}" />
          <EventID>36888</EventID>
          <Version>0</Version>
          <Level>2</Level>
          <Task>0</Task>
          <Opcode>0</Opcode>
          <Keywords>0x8000000000000000</Keywords>
          <TimeCreated SystemTime="2011-04-05T13:52:09.144278900Z" />
          <EventRecordID>79628</EventRecordID>
          <Correlation />
          <Execution ProcessID="552" ThreadID="672" />
          <Channel>System</Channel>
          <Computer>DEVELOP4.CONTOSO.COM</Computer>
          <Security UserID="S-1-5-18" />
        </System>
        <EventData>
          <Data Name="AlertDesc">10</Data>
          <Data Name="ErrorState">1203</Data>
        </EventData>
      </Event>

    Additional information: the process generating the error is lsass.exe; the OS is Windows 7 Professional x64; the machine is joined to a domain; I was able to access the shares locally in the past; and I am having the same issue on 3 other computers with similar configurations. Any help would be greatly appreciated, because I have no idea what's wrong. Thanks!

    Read the article

  • Knowledge and user-generated content management system to track files, research, proposals, etc.?

    - by Eshwar
    I'll try to keep it short. Here's the scenario: we have employees all over the world performing similar work, i.e., research, generating PowerPoint slides, Word documents, graphics, etc. Often a lot of this previous work can be reused for a future project. The current arrangement is email and phone calls, which, as you would agree, is quick if you know where to look but otherwise archaic and very, very inefficient. So I am looking for software that will allow me to do the following: tag files, e.g., an investor presentation on cellphone usage in Kenya would be tagged investor, cellphone, kenya; manage references, e.g., if we read something on the internet, we should be able to paste that link in some fashion and tag it as above; preferably cloud-based, so that it can be accessed by anybody, and additionally it would be nice (though NOT a must) to have access levels (director, manager, everyone); a nice interface that non-technically-savvy folks can warm up to ;) ; and a desktop app would be handy, so that people don't always have to click upload or something. A tree-based system is inefficient in this case because content is usually linked across branches, and people might not quite agree on one format for the tree; tagging works around this very nicely. What I have considered so far: Evernote (for its more professional look), Springpad (for its versatility with content), and Mendeley (a research manager, in some ways ideal, but I fear it's limited to PDFs). The goal is that when somebody wants to look for a document, they don't have to ask a colleague; they can just search with keywords and all relevant information shows up. Thanks!

    Read the article

  • Reset locale in Debian under Squeeze

    - by si2w
    I have problems with the locale in Debian. I have tried many things, but nothing works for me:

      locale -a
      locale: Cannot set LC_CTYPE to default locale: No such file or directory
      C
      POSIX
      en_US.utf8

    I tried to set en_US.utf8, without success, using dpkg-reconfigure locales -plow:

      perl: warning: Setting locale failed.
      perl: warning: Please check that your locale settings:
      LANGUAGE = "en_US",
      LC_ALL = (unset),
      LC_CTYPE = "UTF-8",
      LANG = (unset)
      are supported and installed on your system.
      perl: warning: Falling back to the standard locale ("C").
      locale: Cannot set LC_CTYPE to default locale: No such file or directory
      locale: Cannot set LC_ALL to default locale: No such file or directory
      /usr/bin/locale: Cannot set LC_CTYPE to default locale: No such file or directory
      /usr/bin/locale: Cannot set LC_ALL to default locale: No such file or directory
      Generating locales (this might take a while)...
      en_US.UTF-8... done
      Generation complete.
      perl: warning: Setting locale failed.
      perl: warning: Please check that your locale settings:
      LANGUAGE = "en_US",
      LC_ALL = (unset),
      LC_CTYPE = "UTF-8",
      LANG = (unset)
      are supported and installed on your system.
      perl: warning: Falling back to the standard locale ("C").
      perl: warning: Setting locale failed.
      perl: warning: Please check that your locale settings:
      LANGUAGE = "en_US",
      LC_ALL = (unset),
      LC_CTYPE = "UTF-8",
      LANG = (unset)
      are supported and installed on your system.
      perl: warning: Falling back to the standard locale ("C").

    After a reboot, I try to use a Perl script:

      perl: warning: Setting locale failed.
      perl: warning: Please check that your locale settings:
      LANGUAGE = "en_US",
      LC_ALL = (unset),
      LC_CTYPE = "UTF-8",
      LANG = "en_US.UTF-8"
      are supported and installed on your system.
      perl: warning: Falling back to the standard locale ("C").

    Here is my /etc/default/locale config file:

      cat /etc/default/locale
      LANG=en_US.UTF-8
      LANGUAGE=en_US

    Any idea how to solve this (stupid) problem? Thanks
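    The warnings contain the real clue, offered as a hedged diagnosis: LC_CTYPE is set to "UTF-8", which is not a valid locale name on Debian (it is a value that macOS terminals commonly forward over SSH via SendEnv). Two sketches of a fix:

      # 1) Give LC_CTYPE a full, valid value system-wide:
      printf 'LC_CTYPE=en_US.UTF-8\n' >> /etc/default/locale
      # 2) Or stop sshd from importing the client's locale variables:
      #    comment out "AcceptEnv LANG LC_*" in /etc/ssh/sshd_config,
      #    then restart sshd: /etc/init.d/ssh restart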

    Read the article

  • Multiple Video Cards - Stuttering

    - by jstawski
    I have two video cards: an XFX PVT84JUDD3 GeForce 8600GT XXX 256MB 128-bit GDDR3 PCI Express x16 SLI-supported video card, and an EVGA 256-P1-N399-LX GeForce 6200 256MB 64-bit GDDR2 PCI video card, both running the same set of drivers on Windows 7 64-bit. When I work with two monitors connected to the 8600GT card, everything works smoothly. When I connect a third one to the 6200, Windows works well, and then all of a sudden the screens turn black for up to 5 minutes. Then they come back, and at some random interval they go black again. I can still see the pointer, and I can hit CTRL+ALT+DEL and see the menu to log off, bring up the task manager, etc. I've tried moving the 6200 to another PCI slot and the error persists. I've tried connecting two monitors, one to each card: same problem. I tried swapping them and mixing and matching the monitors to see if it was a problem with a monitor, and my conclusion was that it is not. The problem also occurred with Vista 64. What could be generating this problem? Could it be the fact that they use different interfaces? Maybe the motherboard? Should I change something in the BIOS? What do you guys think?

    Read the article
