Search Results

  • File descriptor linked to socket or pipe in proc

    - by primero
    I have a question regarding file descriptors and their links in the proc file system. I've observed that if I list the file descriptors of a certain process from proc:

        ls -la /proc/1234/fd

    I get the following output:

        lr-x------ 1 root root 64 Sep 13 07:12 0 -> /dev/null
        l-wx------ 1 root root 64 Sep 13 07:12 1 -> /dev/null
        l-wx------ 1 root root 64 Sep 13 07:12 2 -> /dev/null
        lr-x------ 1 root root 64 Sep 13 07:12 3 -> pipe:[2744159739]
        l-wx------ 1 root root 64 Sep 13 07:12 4 -> pipe:[2744159739]
        lrwx------ 1 root root 64 Sep 13 07:12 5 -> socket:[2744160313]
        lrwx------ 1 root root 64 Sep 13 07:12 6 -> /var/lib/log/some.log

    I understand what a file descriptor is, and in this example I understand descriptors 0, 1, 2 and 6: they are tied to actual resources on my computer. I also guess that 5 is connected to some resource on the network (because of the socket). What I don't understand is the meaning of the numbers in the brackets. Do they point to some property of the resource? Also, why are some of the links broken? And lastly, since I'm asking anyway: what is a pipe?
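
    One way to chase down the other end of such a pipe is to search every process's fd directory for links naming the same inode. A minimal sketch, assuming the inode number from the listing above (substitute your own):

        # Find every descriptor on the system pointing at this pipe inode;
        # both the read end and the write end will show up, possibly in
        # different processes. The brackets are escaped so find treats them
        # literally rather than as a glob character class.
        INODE=2744159739
        find /proc/[0-9]*/fd -lname "pipe:\[$INODE\]" 2>/dev/null

        # If lsof is installed, it resolves the same inode to process names.
        lsof 2>/dev/null | grep "$INODE"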

  • Samba/Winbind issues joining an Active Directory domain

    - by Frap
    I'm currently in the process of setting up winbind/samba and am hitting a few issues. I can test connectivity with wbinfo fine:

        [root@buildmirror ~]# wbinfo -u
        hostname
        username
        administrator
        guest
        krbtgt
        username

        [root@buildmirror ~]# wbinfo -a username%password
        plaintext password authentication succeeded
        challenge/response password authentication succeeded

    However, when I run getent I don't get any AD accounts returned:

        [root@buildmirror ~]# getent passwd
        root:x:0:0:root:/root:/bin/bash
        bin:x:1:1:bin:/bin:/sbin/nologin
        daemon:x:2:2:daemon:/sbin:/sbin/nologin
        adm:x:3:4:adm:/var/adm:/sbin/nologin
        lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
        sync:x:5:0:sync:/sbin:/bin/sync
        shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
        halt:x:7:0:halt:/sbin:/sbin/halt
        mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
        uucp:x:10:14:uucp:/var/spool/uucp:/sbin/nologin
        operator:x:11:0:operator:/root:/sbin/nologin
        puppet:x:52:52:Puppet:/var/lib/puppet:/sbin/nologin

    My nsswitch looks like this:

        passwd:     files winbind
        shadow:     files winbind
        group:      files winbind
        #hosts:     db files nisplus nis dns
        hosts:      files dns

    And I'm definitely joined to the domain:

        [root@buildmirror ~]# net ads info
        LDAP server: 192.168.4.4
        LDAP server name: pdc.domain.local
        Realm: domain.local
        Bind Path: dc=DOMAIN,dc=LOCAL
        LDAP port: 389
        Server time: Sun, 05 Aug 2012 17:11:27 BST
        KDC server: 192.168.4.4
        Server time offset: -1

    So what am I missing?
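
    When wbinfo works but getent returns nothing, the usual suspects are a missing idmap range or user enumeration being disabled in smb.conf. A hedged check - the option names are standard Samba 3.x settings, but the range values below are placeholders, not a recommendation:

        # Show the effective idmap/winbind settings; winbind cannot hand
        # UIDs to NSS without a configured range.
        testparm -s 2>/dev/null | grep -Ei 'idmap|winbind'

        # Typical [global] additions in smb.conf -- adjust to your environment:
        #   idmap uid = 16777216-33554431
        #   idmap gid = 16777216-33554431
        #   winbind enum users  = yes
        #   winbind enum groups = yes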

  • Nginx, Apache, MySQL, Memcached on a server with 4GB RAM: how do I optimize so memory is sufficient?

    - by TomSawyer
    I have one dedicated server with 4GB of RAM, running Nginx as a proxy in front of Apache, plus Memcached and MySQL. My site's visitor numbers haven't increased lately, but the server always becomes overloaded during certain hours (9AM - 3PM). RAM in use climbs steadily until it is full; at that moment the server is overloaded and I have to kill the Apache and MySQL services and reboot to get free memory. Then the cycle repeats.

    Here is my RAM in use at the moment (presumably MB): 160 (nginx), 220 (apache), 512 (memcache), 924 (mysql). Process counts: 4 (nginx), 14 (apache), 5 (memcache), 20 (mysql).

    And here's my my.cnf. Can someone help me optimize it?

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        skip-locking
        skip-networking
        skip-name-resolve
        # enable log-slow-queries
        log-slow-queries = /var/log/mysql-slow-queries.log
        long_query_time=3
        max_connections=200
        wait_timeout=64
        connect_timeout = 10
        interactive_timeout = 25
        thread_stack = 512K
        max_allowed_packet=16M
        table_cache=1500
        read_buffer_size=4M
        join_buffer_size=4M
        sort_buffer_size=4M
        read_rnd_buffer_size = 4M
        max_heap_table_size=256M
        tmp_table_size=256M
        thread_cache=256
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=16M
        thread_concurrency=8
        myisam_sort_buffer_size=128M
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0

        [mysqldump]
        quick
        max_allowed_packet=16M

        [mysql]
        no-auto-rehash

        [isamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [myisamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [mysqlhotcopy]
        interactive-timeout

        [mysql.server]
        user=mysql
        basedir=/var/lib

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
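
    For context, a rough worst-case estimate of what this my.cnf allows MySQL to claim, using the common global-buffers-plus-per-connection-buffers rule of thumb (a back-of-envelope sketch, not an exact accounting):

        # Values in MB, taken straight from the my.cnf above.
        query_cache=16
        tmp_table=256                  # implicit temp tables, counted once
        per_conn=$((4 + 4 + 4 + 4))    # read + join + sort + read_rnd buffers
        max_conn=200
        echo "worst case: $((query_cache + tmp_table + per_conn * max_conn)) MB"
        # -> over 3 GB from per-connection buffers alone, before Apache,
        #    memcached (512 MB) and nginx take their share of the 4 GB box.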

  • Oracle 11g Data Guard over a WAN

    - by Dave LeJeune
    We are in the process of looking at using Oracle's Data Guard to replicate our 11g instance from a colo facility in Washington DC to Chicago. To give some basics: we have approximately 25TB of storage and a healthy transaction rate in the 1-2K/sec range. Also, because we are processing data in real time, we have a 24x7x365 requirement for processing data. We don't have any respites as far as volume, except for system upgrades (once every few months) where we take the system offline but then of course experience a spike in transactions when we bring the system back online. Ideally we would want the second instance in the DG configuration semi-online in a read-only fashion for reports, etc. We evaluated DG in 10g and were not overly impressed, and research seemed to show that earlier versions had issues with replication over a WAN, but I have heard good things about the modifications the product has gone through with 11g. Can anyone confirm an instance of this size and transaction rate being replicated over a WAN, and if so, what is the general latency? Any information or experiences with a DG implementation of this size and scope would really be helpful (or larger - I also realize we are still relatively small compared to many others out there). Many thanks in advance.

  • Hard drive causing BSOD

    - by JoshIrving
    I've come across a problem after building my new PC and installing a clean Windows 7. I originally planned on RAID 1 or 0, but after further research I decided against it, so I was left with two 1TB Western Digital Black SATA 6Gb/s hard drives. My plan was to use the second hard drive as a backup (using Windows Backup or third-party software). I set both hard drives to AHCI in the BIOS and installed Windows 7, then went through the lengthy process of downloading and installing each driver manually (latest versions), using the motherboard disk for a list of what I needed. After a few restarts, and before installing any software, I took an image backup onto DVD and the second hard drive.

    I first witnessed the problem during the first scheduled Windows backup. The progress bar froze at about 70% (document backup done, image backup in progress). It stayed still for 2 hours until it blue-screened. The next time the backup froze, I tried shutting down. It logged me out and got stuck at the last step ("Shutting down" and the blue spinner) for an hour, until I forced a hard shutdown. I later realised this hasn't got anything to do with the backup: I ended up blue-screening on almost every shutdown (same place). It turns out it's because of the second hard drive spinning down or turning off. The computer will now shut down properly, as long as I remember to read or write to the second drive before executing shutdown. I've now set "Turn off hard disk after: Never" - no problems, so far.

    Do I have a dodgy hard drive (or two), or should I investigate the POWER_STATE_DRIVER_FAILURE BSOD - could it be a driver issue? AHCI?

  • Suggestions on pointing domains to a dedicated server

    - by Bizz
    I recently got a dedicated server and I am still at a learning stage, so please bear with me. I wanted to have 3 domains pointed to my server. Initially I asked the host about the process to point one, and they responded with:

        Hello, I can set that rDNS for you. Please make sure the domain is
        pointed to our name servers at godaddy. They are:
        ns1.xxxxx.net
        ns2.xxxxx.net
        After this is done, please allow up to 24 hours for global propagation.
        Alternatively, we can host your DNS for you if you prefer. What is the
        domain you would like xx.xxx.xx.xx to resolve to?

    I then asked them to point one of my domains. They responded:

        Did you want us to host your dns for that domain or just an rDNS record?

    They also said:

        Hosting your DNS is a free service here. We can only do 1 domain per IP.
        If you would like to purchase additional IPs, they are $1/IP per month.

    I personally don't want to host DNS myself, nor run a mail server. I have a single IP so far, so it will start to get expensive if I want to host 25 domains with these guys. I am still in the trial period. Does this seem reasonable as far as pricing goes? If I want to have someone host the DNS and mail server, this gets super expensive: email hosting from Rackspace starts at $2 per mail address, and from then on it's the same if you want added features such as archiving, etc. What would you suggest I do if I am on a shoestring budget, but I also want to avoid the hassle of doing it myself? I only have 3 domains so far and would need a few mail addresses for each of them.
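
    Once the nameservers at the registrar are changed, the delegation can be verified from any machine with dig. A quick hedged sketch (example.com and the address are placeholders):

        # Which nameservers does the world currently see for the domain?
        dig +short NS example.com

        # Does the A record resolve to the dedicated server's IP?
        dig +short A example.com

        # And the reverse (rDNS) lookup for the server's IP:
        dig +short -x 203.0.113.10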

  • SQL Server uninstallation issue

    - by angel
    I'm unable to remove SQL Server 2008 SP1 completely from my system. I'm using Windows 7 Ultimate. Every time I try uninstalling it I get the following error. How can I remove it? Here is the log:

        Overall summary:
          Final result:            Failed: see details below
          Exit code (Decimal):     -2068643839
          Exit facility code:      1203
          Exit error code:         1
          Exit message:            Failed: see details below
          Start time:              2013-06-24 21:10:38
          End time:                2013-06-24 21:21:17
          Requested action:        Uninstall
          Log with failure:        C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\sql_rs_Cpu64_1.log
          Exception help link:     http://go.microsoft.com/fwlink?LinkId=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.0.1600.22

        Machine Properties:
          Machine name:            ABHI-PC
          Machine processor count: 4
          OS version:              Windows Vista
          OS service pack:         Service Pack 1
          OS region:               United States
          OS language:             English (United States)
          OS architecture:         x64
          Process architecture:    64 Bit
          OS clustered:            No

        Product features discovered:
          Product          Instance     Instance ID         Feature                   Language  Edition             Version       Clustered
          Sql Server 2008  MSSQLSERVER  MSRS10.MSSQLSERVER  Reporting Services        1033      Enterprise Edition  10.0.1600.22  No
          Sql Server 2008                                   Management Tools - Basic                                10.0.1600.22  No

        Package properties:
          Description:             SQL Server Database Services 2008
          SQLProductFamilyCode:    {628F8F38-600E-493D-9946-F4178F20A8A9}
          ProductName:             SQL2008
          Type:                    RTM
          Version:                 10
          SPLevel:                 0
          Installation edition:    ENTERPRISE

        User Input Settings:
          ACTION:                  Uninstall
          CONFIGURATIONFILE:       C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\ConfigurationFile.ini
          FEATURES:                RS,SSMS,SNAC_SDK,CE_RUNTIME,CE_TOOLS,SNAC
          HELP:                    False
          INDICATEPROGRESS:        False
          INSTANCEID:
          INSTANCENAME:            MSSQLSERVER
          MEDIASOURCE:
          QUIET:                   False
          QUIETSIMPLE:             False
          X86:                     False

        Configuration file:        C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\ConfigurationFile.ini

        Detailed results:
          Feature:                 SQL Client Connectivity
          Status:                  Skipped
          MSI status:              Passed
          Configuration status:    Passed

          Feature:                 SQL Client Connectivity SDK
          Status:                  Skipped
          MSI status:              Passed
          Configuration status:    Passed

          Feature:                 Reporting Services
          Status:                  Failed: see logs for details
          MSI status:              Passed
          Configuration status:    Failed: see details below
          Configuration error code: 0xFFD65603
          Configuration error description: Input string was not in a correct format.
          Configuration log:       C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\Detail.txt

          Feature:                 SQL Compact Edition Tools
          Status:                  Passed
          MSI status:              Passed
          Configuration status:    Passed

          Feature:                 SQL Compact Edition Runtime
          Status:                  Skipped
          MSI status:              Passed
          Configuration status:    Passed

          Feature:                 Management Tools - Basic
          Status:                  Failed: see logs for details
          MSI status:              Passed
          Configuration status:    Passed

        Rules with failures:
          Global rules:            There are no scenario-specific rules.
          Rules report file:       C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\SystemConfigurationCheck_Report.htm

  • Updating files with a Perforce trigger before submit [migrated]

    - by phantom-99w
    I understand that this question has, in essence, already been asked, but that question did not have an unequivocal answer, so please bear with me.

    Background: in my company, we use Perforce submission numbers as part of our versioning. Regardless of whether this is a correct method or not, that is how things are. Currently, many developers do separate submissions for code and documentation: first the code, and then the documentation, to update the client-facing docs with what the new version numbers should be. I would like to streamline this process.

    My thoughts are as follows: create a Perforce trigger (which runs on the server side) which scans the submitted documentation files (such as .txt) for a unique term (such as #####PERFORCE##CHANGELIST##NUMBER###ROFL###LOL###WHATEVER#####) and then replaces it with the value of what the changelist number will be when submitted. I already know how to determine this value. What I cannot figure out is how, or where, to update the files. I have already determined that the change-content trigger (whether what I want is possible or not), which "fire[s] after changelist creation and file transfer, but prior to committing the submit to the database", is the way to go. At this point the files need to exist somewhere on the server. How do I determine the (temporary?) location of these files from within, say, a Python script, so that I can update them or run sed to replace the placeholder value with the intended value? The online documentation for Perforce that I have found so far is not very explicit on whether this is possible or how the mechanics of a submission work at this stage.
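
    For what it's worth, at change-content time the incoming revisions are not ordinary files in a temp directory; they are addressable through the @=changelist revision specifier. A hedged sketch of a trigger that at least locates the placeholder - the trigger name, depot path and script path are illustrative, and %changelist% is expanded by the server:

        # Entry in the 'p4 triggers' table (illustrative):
        #   docver  change-content  //depot/docs/...  "/usr/local/bin/docver.sh %changelist%"

        # docver.sh -- scan in-flight content for the placeholder.
        CHANGE=$1
        p4 files "//depot/docs/...@=$CHANGE" | while read -r line; do
          file=${line%%#*}       # strip '#rev - action ...' from the listing
          if p4 print -q "$file@=$CHANGE" | grep -q 'PERFORCE##CHANGELIST##NUMBER'; then
            echo "placeholder found in $file"
          fi
        done

    Reading at this stage is the easy part; rewriting the archived content from inside the trigger is exactly the part the documentation is quiet about.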

  • NGINX AEGIR DRUPAL permissions 403 forbidden

    - by nlam
    I'm new to nginx, which I installed on Mac OS for use with Aegir and Drupal. It's running great, but I have a problem with permissions.

    My hostmaster installation is here: /var/aegir/hostmaster-6.x-1.7/. The hostmaster settings file is here: /var/aegir/hostmaster-6.x-1.7/sites/aegir.ldev/settings.php. Permissions for settings.php are set to 440 automatically by hostmaster, but I'm getting a 403 Forbidden page because of this. If I give read permission to "other" the site works great (444 or even 004). Drupal is also telling me that the file system paths are not writable (sites/aegir.ldev/files and sites/aegir.ldev/private), so I would have to change the permissions there too. Moreover, I would have to change permissions for every site installed by hostmaster.

    Anyway, in my nginx.conf I have the following:

        user "myuser" _www;

    The owner and group for settings.php, sites/example.ldev/files and sites/example.ldev/private are "myuser" and "_www". Changing permissions to 004 solves the problem, but really confuses me: why do "other" have permission and not owner or group? I've checked the processes running in Activity Monitor. Nginx is running as "myuser", except for one process running as root. So I'm stumped. Hope someone can help.
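
    One hedged diagnostic worth running: compare the users that actually open the file with the file's owner and group. If PHP is served through FPM or FastCGI, it is the PHP worker's user, not nginx's, that needs read access to settings.php - which would explain why only the "other" bits seem to matter:

        # Who are the nginx workers -- and, if present, the PHP workers --
        # actually running as? It is usually the PHP process, not nginx
        # itself, that opens settings.php.
        ps aux | grep -E '[n]ginx|[p]hp'

        # What does the filesystem say about the file? (BSD/macOS stat)
        stat -f '%Su %Sg %Lp %N' \
          /var/aegir/hostmaster-6.x-1.7/sites/aegir.ldev/settings.php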

  • How to track things that SHOULD happen, but might not have

    - by Kamiel Wanrooij
    I am running into a couple of issues with some applications we've deployed and maintain. I have the feeling we have approached this with some anti-patterns up to now, but I would like to see how to make this more flexible and stable.

    In one situation, we have a server at a client which pushes data to us to parse every night (yes, Windows Task Scheduler). This is highly unstable, however, so once a month this doesn't happen, for reasons out of our control. This heavily impacts our business, since we run with stale data in that situation. In another scenario we have a lot of background job processes that should be running. We already keep them up using bluepill (http://www.github.com/arya/bluepill), but obviously restarts happen, both automatically and manually, and people forget things or systems mess up.

    What I would like to track is events that should occur or things that should exist - the existence of a process, the execution of a program, or the creation/age of a file - and be alerted when they don't happen or don't exist. We develop most things in Ruby on Rails, use NewRelic, Bluepill and Munin, and run on Ubuntu. I've been toying around with counting ps aux | grep processname | wc -l in Munin scripts, or capturing the age of a file and raising alerts over 24-26 hours, stuff like that (a sketch of the file-age approach follows below). Is there better tooling to track things that should happen, and raise alerts if they don't?

    P.S. I know some things are suboptimal, like manually having to define bluepill for applications and then forgetting to do so. The same goes for the push-based approach of the first application: a dedicated daemon on the client side that we control, whose connection to us we can track, might be a much better solution.
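
    For reference, a minimal sketch of the file-age check mentioned above, written in the Nagios-plugin style that most monitoring tools can consume; the path and threshold are placeholders:

        # Alert when an expected nightly file is stale (older than 26 hours)
        # or missing altogether.
        FILE=/data/incoming/nightly.dump
        MAX_AGE=$((26 * 3600))

        now=$(date +%s)
        mtime=$(stat -c %Y "$FILE" 2>/dev/null || echo 0)   # 0 if missing

        if [ $((now - mtime)) -gt "$MAX_AGE" ]; then
          echo "CRITICAL: $FILE missing or older than 26h"
          exit 2        # Nagios-style exit code
        fi
        echo "OK: $FILE is fresh"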

  • MySQL: table organisation for very large sets with high update frequency

    - by Remiz
    I'm facing a dilemma in the choice of my MySQL schema. Before I start, here is an extremely simplified picture of my database: http://i43.tinypic.com/2wp5lxz.png

    In one sentence: for each customer, the application harvests text data and attaches tags to each piece of data collected. As an approximation of the usage of each table, here is what I expect:

        customer:  ~5000, shouldn't grow fast
        data:      5 million per customer, could double or triple for big customers
        tag:       ~1000, fairly fixed size
        data_tag:  easily hundreds of millions per customer; each datum can be tagged many times

    The harvesting process is permanent, meaning that around every 15 minutes new data comes in and is tagged, which requires very constant index refreshing. A lot of my queries are SELECT COUNTs of DATA between specific DATES, tagged with a specific TAG, on a specific CUSTOMER (very rarely will they involve several customers).

    Here is the situation: as you can imagine, with this volume of data I'm facing a challenge in terms of data organization and indexing. Again, this is a very minimalistic and simplified version of my structure. My question is, is it better to stick with this model and manage crazy index optimization (which potentially means billions of rows in the data_tag table), or to change the schema and use one data table and one data_tag table per customer (which means 5000 tables in my database)?

    I'm running all of this on a MySQL 5.0 dedicated server (quad-core, 8GB of RAM), replicated. I only use InnoDB; I also have another server running Sphinx. Knowing all of this, I can't wait to hear your opinion. Thanks.

  • Web service for checking out / leasing a token

    - by JP Slavinsky
    I run a web site on AWS that has a number of web servers (say, 4 of them) running behind a load balancer. For this particular web site, I have one New Relic license key for instrumentation. At any one time, I only want one of the 4 web servers to be using the key. If that server goes offline, I want one of the remaining web servers to be able to begin using the license key.

    Does anyone know of a service that would let me manage this process? The service would not particularly need to store the key itself, but rather just manage the fact that only one web server can lease out the right to use the key at any time - something where the web servers have to come back every few minutes and renew their lease, and if they don't, it becomes available to someone else. I just realized I could maybe accomplish a hacked version of this using a file on S3, but that doesn't prevent race conditions, etc., and is definitely hackish. Any thoughts welcome. FWIW, this site is built on Ruby on Rails. Thanks! JP

  • Understanding ulimit -u

    - by tripleee
    I'd like to understand what's going on here.

        linvx$ ( ulimit -u 123; /bin/echo nst )
        nst
        linvx$ ( ulimit -u 122; /bin/echo nst )
        -bash: fork: Resource temporarily unavailable
        Terminated
        linvx$ ( ulimit -u 123; /bin/echo one; /bin/echo two; /bin/echo three )
        one
        two
        three
        linvx$ ( ulimit -u 123; /bin/echo one & /bin/echo two & /bin/echo three )
        -bash: fork: Resource temporarily unavailable
        Terminated
        one

    I speculate that the first 122 processes are consumed by Bash itself, and that the remaining ulimit governs how many concurrent processes I am allowed to have. The documentation is not very clear on this. Am I missing something?

    More importantly, for a real-world deployment, how can I know what sort of ulimit is realistic? It's a long-running daemon which spawns worker threads on demand and reaps them when the load decreases. I've had it spin the server to its death a few times. The most important limit is probably memory, which I have now limited to 200M per process, but I'd like to figure out how to enforce a limit on the number of children (the program does allow me to configure a maximum, but how do I know there are no bugs in that part of the code?)
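
    One hedged way to see why the numbers land where they do: the limit set by ulimit -u (RLIMIT_NPROC) is checked against the total number of processes owned by the user, not just those started by this shell, so everything else running under the same account counts against the cap:

        # How many processes does this user already have?
        ps -u "$USER" --no-headers | wc -l

        # The current cap:
        ulimit -u

        # If the cap is at or barely above the existing count, the next
        # fork (even a lone /bin/echo) fails with EAGAIN, as seen above.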

  • Can I have a single solid state drive and a RAID array on the same machine? [closed]

    - by jaminto
    To summarize: I'm looking to use a single solid-state drive as my primary drive, and two conventional SATA drives in a RAID 1 configuration for data. I am trying to install 64-bit Windows 7 onto this configuration. Is this possible? Here are the details:

    I built a desktop that has been running 64-bit Vista on two 500GB drives in a RAID 1 array for a few years. I just purchased an Intel X25-M 80GB SATA solid-state drive, and was planning on using it as my primary drive while keeping the RAID 1 array as my data drive. I added the SSD and, in the RAID setup, configured it as a RAID 0 array of only one disk. Then I tried to do a clean install of Windows 7 64-bit, but got stuck in the "Missing driver for CD/DVD drive" black hole of selecting driver files and Windows telling me that I don't have the appropriate driver for my hardware. The missing hardware is NOT a CD/DVD drive, since I'm installing off of my only CD/DVD drive. Plus, at one point I was able to point it at a driver for my RAID controller, and then my hard drives magically showed up as browsable sources for finding drivers for some other unnamed device that setup couldn't recognize.

    After a few hours of trying drivers (this was a very slow process) I decided to reboot and look at the BIOS settings. I'm using an ASUS M2A-VM motherboard, which has an ATI SB600 RAID controller on board. I switched the "On board SATA Type" setting from "SATA" to "AHCI", thinking that since AHCI is an Intel thing this would help. Unfortunately, this abandoned my RAID configuration, and my previously mirrored drives now show up as separate drives when I boot into my current Windows installation.

    Am I trying to do the impossible here? Should I just buy a separate SATA/RAID PCI card and plug the SSD into that? Any help would be greatly appreciated.

  • Plesk Uninstall Memory issue

    - by user115079
    I am trying to uninstall Plesk from my VPS by running the following command:

        yum remove sw-* psa-* plesk-*

    When I run this command I get the following error:

        Running rpm_check_debug
        Running Transaction Test
        memory alloc (4 bytes) returned NULL.

    The first time I ran the command, the number in "memory alloc" was very big (something like 67864987) rather than 4 bytes. I then googled it and found some cache-clearing/ulimit suggestions, executed them, rebooted my system, stopped all processes and ran the command again, but I'm still getting the 4-byte failure and don't know how to get rid of it. I also tried ulimit after the reboot with no success. And yes, there is no swap attached. These are the stats of my system:

        [root@vps ~]# free -m
                     total       used       free     shared    buffers     cached
        Mem:           384         67        316          0          0          0
        -/+ buffers/cache:          67        316
        Swap:            0          0          0

        top - 21:01:07 up 3:12, 1 user, load average: 0.24, 0.08, 0.03
        Tasks:  31 total,   2 running,  29 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.0%us,  0.0%sy,  0.0%ni, 100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:    393216k total,    69832k used,   323384k free,        0k buffers
        Swap:        0k total,        0k used,        0k free,        0k cached

    Is there any other way to achieve my goal of uninstalling Plesk? Thanks.

  • Is there a functional-style Unix shell?

    - by Caruccio
    I'm (really) a newbie to functional programming (in fact my only contact with it has been through Python), but it seems like a good approach for some list-intensive tasks in a shell environment. I'd love to do something like this:

        $ [ git clone $host/$repo for repo in repo1 repo2 repo3 ]

    Is there any Unix shell with this kind of feature? Or maybe some feature to allow easy shell access (commands, env/vars, readline, etc.) from within Python? The idea is to use Python's interactive interpreter as a replacement for bash.

    EDIT: Maybe a comparative example will clarify. Let's say I have a list composed of dir/file paths:

        $ FILES=( build/project.rpm build/project.src.rpm )

    And I want to do a really simple task: copy all files to dist/ AND install them in the system (it's part of a build process). Using bash:

        $ cp ${files[*]} dist/
        $ cd dist && rpm -Uvh $(for f in ${files[*]}; do basename $f; done)

    Using a "pythonic shell" approach (caution: this is imaginary code):

        $ cp [ os.path.join('dist', os.path.basename(file)) for file in FILES ] 'dist'

    Can you see the difference? THAT is what I'm talking about. How can a shell with this kind of stuff built in not exist yet? It's a real pain to handle lists in shell, even though it's such a common task: lists of files, lists of PIDs, lists of everything. And a really, really important point: using syntax/tools/features everybody already knows, sh and python. IPython seems to be headed in a good direction, but it's bloated: if a var name starts with '$' it does this, if '$$' it does that. Its syntax is not "natural", with so many rules and "workarounds" ([ ln.upper() for ln in !ls ] - syntax error).
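
    For comparison, a hedged sketch of how plain bash handles the two wished-for examples using arrays and parameter expansion (names taken from the question):

        # The clone loop:
        for repo in repo1 repo2 repo3; do
          git clone "$host/$repo"
        done

        # The copy-and-install example, without the basename subloop:
        FILES=( build/project.rpm build/project.src.rpm )
        cp "${FILES[@]}" dist/
        ( cd dist && rpm -Uvh "${FILES[@]##*/}" )   # ##*/ strips the directories

    Terse, but it also shows why people keep wishing for a more expressive list syntax.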

  • Windows 7 explorer crashing trying to read external hard disk

    - by Mario De Schaepmeester
    I have a 1TB Western Digital hard drive which is almost full, and the last time I tried to plug it into my laptop I got a Windows dialog saying "this hard drive needs to be formatted". I did not panic, because I have experienced things like this before and I know it's often solved by simply re-inserting the drive. Now, however, whenever I plug it in and try to browse it in Explorer by going to "Computer", the explorer process crashes after a while. I simply close Explorer, since it takes ages trying to read the disk and nothing happens.

    After searching on the internet, the best thing to do seemed to be a chkdsk. I tried it via Properties in Explorer (which also took a good 5 minutes to open up); that locks up as well, and after waiting a couple of minutes it says there's no access to the disk, so a chkdsk is not possible.

    I want to make clear that I always use safe removal before pulling out the USB cable. Last time, however, safe removal just would not work, and when trying to shut down Windows the logoff screen just would not disappear (I waited at least 10 minutes), so I powered off the PC by force. This may be the cause of the problems, but the disk was still recognised immediately after that. I really don't want to format this thing, because it contains C: clones of 3 computers and a lot of other stuff that I don't want to re-copy. What would be the best course of action?

    Update: I got chkdsk working via the command line, using the /F and /R options. I am already getting a bunch of lines saying "file record segment X is unreadable", or whatever it is in English - my OS is Dutch. It looks bad. Will chkdsk repair these errors?

  • Truecrypt files corrupted after moving PC into another case

    - by Dygerati
    I recently bought a new PC case and transferred all of my PC hardware into it. The only hardware modification was the addition of two identical RAM modules. The entire process went smoothly, and everything worked and booted as before. The only side effect appeared when I accessed one of my file-based hidden TrueCrypt volumes shortly thereafter: some of the files in the volume - NOT all - seem to be entirely corrupted. The directory and file names are garbled characters, but a few of the directories in the same volume appear and function normally. Also, all files in the non-hidden TrueCrypt volume are still intact.

    Is this not weird? The only other real change I can think of is that the hard drives were connected to different SATA ports on the motherboard. I really don't know the TrueCrypt encryption internals well enough to know what could cause this, and the fact that not all the files were corrupted makes it more bizarre still.

    So, first off (and I'm not too hopeful on this point): would it be possible to restore these files? I have a backup of most, but not all, of the files involved. Other than that, I'm just curious how this happened and how I can prevent it next time. Thanks!

  • Removing expired self-signed certificate in IE9 (created with IIS7.5)

    - by Itison
    Over a year ago, I created a self-signed certificate in IIS 7.5 and exported it. I then installed it for IE9 (it may have been IE8 at the time), which worked fine until a year later, when the certificate expired. I put this off, but today I created a new self-signed certificate in IIS, exported it, and attempted to install it in IE9. The problem is that, for whatever reason, IE cannot seem to forget about the old, expired certificate.

    Here's what I tried initially:

    1. Accessed my ASP.NET application and saw the certificate error.
    2. Clicked "View certificates".
    3. Clicked "Install Certificate" and then Next/Next/Finish.

    At this point it says the import was successful, but it still only shows the expired certificate.

    I've also tried simply double-clicking on the exported certificate on my desktop. Initially I chose to automatically select the certificate store, but then I tried again and manually selected "Trusted Root Certification Authorities". I've also tried dragging and dropping the certificate onto an IE window and clicking "Open"; the process is then exactly the same as if I had double-clicked the certificate, but I had hoped this would somehow specifically tell IE to use this certificate.

    I opened MMC and, with the Certificates snap-in, confirmed that the new certificate was added under "Trusted Root Certification Authorities". It was also under my "Personal" certificates (I guess this is where it goes by default). Nothing worked, so I went through every folder in MMC and deleted the expired certificate. I also deleted the expired certificate in IIS. Nothing has worked. Any ideas? I see no clear resolution, and I can't seem to find any posts related to this issue.

  • Missing Memory on Windows Server 2008

    - by Chris Lively
    I have a Windows 2008 x64 server with 8GB of RAM installed. Task Manager and Resource Monitor both insist that 7.5GB of the RAM is in use. However, the memory list under Processes (Memory Private Bytes) doesn't add up: I have "Show processes from all users" checked, and adding the numbers by hand I come up with about 3.5GB of RAM. I also looked at the latest copy of Sysinternals Process Explorer, and neither Private Bytes nor Working Set adds up to more than about 3.5GB of RAM in use. What's going on?

    Update: I bounced the server to see what would happen with the memory utilization. After boot, once regular operations began, it sat at 3GB of RAM usage. 18 hours later, it's back up to 6.8GB in use, with no indication as to where the additional 3.5GB or so of RAM is being used. Here are links to screenshots of the Resource Monitor and Task Manager: Resource Monitor, Task Manager.

    Update 2: Well, I believe I located the problem. When I detached one of the larger databases from my SQL Server, the amount of RAM shown as "in use" dropped drastically, while the Memory Private Bytes count barely moved. So I'm guessing that SQL Server has some way of allocating memory that doesn't really show up in any of the monitors. I went further and created a new database file, then transferred all of the data from the one I detached. Even though it has the same data, and the same transactions going through it, the memory in use has stayed low. Maybe there was some corruption in the DB? I'll leave it to the DB gods and go searching for another "problem" ;)

  • Despeckle line art

    - by Dour High Arch
    We have a number of line-art charts unfortunately saved as JPEGs. They are now riddled with distracting compression artifacts, or "speckles". Is there any way of removing these? I do not have the original files, and it would be very difficult to recreate them.

    I am running Windows 7 and tried Paint.NET; none of its filters help. Posterize washed out all the colors and left the speckles. Blur made text unreadable. Noise Reduction wrecked the antialiasing of curved lines and perversely enhanced the speckles, making them look like checkerboards.

    Yes, I have Googled for software to do this. There are many programs that advertise despeckling, but after my experience with Paint.NET I don't want to experiment with applications that show no before-and-after images. The only example I have seen that does what I want is from a Photoshop tutorial, but I have dozens of files and the tutorial requires considerable manual fine-tuning. I would prefer to automate or batch-process this task. Commercial apps are fine, but I do not want to spend over $600 and learn a complex program for a single task.
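
    One hedged avenue for batch processing, if a command-line tool is acceptable: ImageMagick ships a despeckle filter that can be looped over a directory. Results on JPEG line art vary, so test on copies first; the paths are placeholders:

        # Batch-despeckle every JPEG, writing PNG output so the result
        # is not recompressed with fresh JPEG artifacts.
        mkdir -p cleaned
        for f in *.jpg; do
          convert "$f" -despeckle -despeckle "cleaned/${f%.jpg}.png"
        done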

  • Can't install anything anymore with apt-get

    - by Aymane Shuichi
    This is the log I get when trying to install anything (php5-fpm, after removing it):

        apt-get install php5-fpm
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        php5-fpm is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up php5-fpm (5.4.4-14+deb7u10) ...
        insserv: warning: script 'S55IptabLes' missing LSB tags and overrides
        insserv: warning: script 'S55IptabLex' missing LSB tags and overrides
        insserv: There is a loop between service IptabLes and mountnfs if started
        insserv: loop involving service mountnfs at depth 8
        insserv: loop involving service networking at depth 7
        insserv: loop involving service mountnfs-bootclean at depth 10
        insserv: There is a loop between service rc.local and mountall if started
        insserv: loop involving service mountall at depth 6
        insserv: loop involving service checkfs at depth 5
        insserv: loop involving service kbd at depth 11
        insserv: There is a loop between service rc.local and mountall-bootclean if started
        insserv: loop involving service mountall-bootclean at depth 7
        insserv: loop involving service urandom at depth 9
        insserv: There is a loop between service IptabLes and mountdevsubfs if started
        insserv: loop involving service mountdevsubfs at depth 2
        insserv: loop involving service udev at depth 1
        insserv: There is a loop at service rc.local if started
        insserv: There is a loop at service IptabLes if started
        insserv: Starting IptabLes depends on rc.local and therefore on system facility `$all' which can not be true!
        (the line above is repeated 99 times)
        insserv: Max recursions depth 99 reached
        insserv: loop involving service postfix at depth 2
        insserv: There is a loop between service IptabLes and udev if started
        insserv: loop involving service mountkernfs at depth 1
        insserv: loop involving service IptabLes at depth 1

    And here is the error I get:

        insserv: exiting now without changing boot order!
        update-rc.d: error: insserv rejected the script header
        dpkg: error processing php5-fpm (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         php5-fpm
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The biggest operation I performed before this was upgrading nginx from 1.2 to 1.6, following this guide: How to upgrade nginx from 1.2 to 1.6 on Debian 7. Please help!
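
    The immediate complaint is the two init scripts missing LSB headers, which makes insserv refuse to reorder the boot sequence. For reference, a hedged sketch of the header block insserv expects at the top of an init script such as /etc/init.d/IptabLes (the field values are illustrative, not a recommendation to keep the script):

        ### BEGIN INIT INFO
        # Provides:          IptabLes
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: placeholder description
        ### END INIT INFO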

  • Searching a SharePoint site with Windows Explorer

    - by alexsome
    Every week, I manually back up recent versions of the files on my group's SharePoint site. I open the library in Windows Explorer, search for all files modified in the past week, then copy and paste them to a network location. We need this process because our SharePoint site has a quota that we would easily hit if we kept unlimited versions, so we keep a history of older versions on the network.

    Recently I got an upgrade to my work computer, and now I am unable to search the site using Windows Explorer. When I run the search for files modified in the last week, no results are returned. If I run a search with no criteria on the file library, all the files are found, but the "modified on" field is blank - the search results show only the file and type fields. The new computer has Windows XP, just like the old one did.

    I hope this makes sense. Does anyone have any clue what the problem could be? I'd be happy to provide more info if necessary. It's bugging me to no end, and I'm not even sure where to begin looking - it's either a trivial issue or a very obscure one. Thanks a lot.

  • What steps should I take to secure Tomcat 6.x?

    - by PAS
    I am in the process of setting up a new Tomcat deployment and want it to be as secure as possible. I have created a 'jakarta' user and have jsvc running Tomcat as a daemon.

    Any tips on directory permissions and such to limit access to Tomcat's files? (A rough sketch follows below.) I know I will need to remove the default webapps - docs, examples, etc. Are there any best practices I should be using here? What about all the config XML files - any tips there? Is it worth enabling the Security Manager so that webapps run in a sandbox? Has anyone had experience setting this up?

    I have seen examples of people running two instances of Tomcat behind Apache. It seems this can be done using mod_jk or mod_proxy - any pros and cons of either? Is it worth the trouble?

    In case it matters, the OS is Debian Lenny. I am not using apt-get because Lenny only offers Tomcat 5.5 and we require 6.x. Thanks!
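
    As a starting point, a hedged sketch of restrictive ownership for a Tomcat tree run by a dedicated daemon user - the 'jakarta' user comes from the question's setup, while the /opt/tomcat path and directory layout are assumptions:

        # Root owns the installation; the daemon user only gets read access,
        # plus write access to the directories Tomcat actually needs.
        chown -R root:jakarta /opt/tomcat
        chmod -R g-w,o-rwx    /opt/tomcat
        chown -R jakarta      /opt/tomcat/logs /opt/tomcat/temp /opt/tomcat/work
        chmod 640             /opt/tomcat/conf/*.xml   # configs readable by group only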
