Search Results

Search found 13889 results on 556 pages for 'results'.

Page 402/556

  • Textmate add multiline text at end of line

    - by Yuval
    In Textmate, I am able to add text to several lines at once by clicking and holding the Option key and dragging with the mouse. Say I have the following lines: foo 1: foo 2: foo 3: I can easily click and hold Option and then drag down along the lines to select the text at the end of each line, and then type "bar" once and it will be added to all lines, as such: foo 1: bar foo 2: bar foo 3: bar Fantastic. The problem I run into is when my lines aren't the same length, as such: foo 19: foo 37842342346: foo 503: Now if I want to add text to the end of each line, I have to either do it manually, or choose enough space so that the longest line is not overwritten, as such: foo 19: bar foo 37842342346: bar foo 503: bar This results in a lot of unwanted whitespace in lines that don't need it. Granted, I could easily run a regular expression search to replace all multiple occurrences of a space with a single one, but I was wondering if there's a way to select all the line endings at once without having to do that. Any idea? Thanks!
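
    If column (Option-drag) selection doesn't fit because the lines have different lengths, a regex Find & Replace gets the same result without the padding. This is a generic Textmate technique rather than something from the thread, using the colon the sample lines end with:

        Find (regular expression):  :$
        Replace:                    : bar

    Run "Replace All" over the selected lines and " bar" lands at the true end of each line, long or short.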

  • Hardware issue: computer plays beep sounds at startup, and then turns off

    - by Darkkurama
    Turns out that after moving, my computer got damaged somehow. I installed my computer 2 days ago. I didn't check the state of the hardware parts and turned the pc on recklessly, not realizing that the cpu fan was detached from the cpu. After this I reattached the fan. The computer worked somewhat faultily after that, because at startup it would play a long beep (different from the normal beep) and wouldn't display anything. I tried restarting after this incident and it got fixed (don't know how, but it did), and I was even able to play some videogames for some hours (more than 5). This morning I tried to get this issue sorted out by checking the cpu and the connections in general again, and to my surprise, when I turned it on again, the computer wouldn't boot at all. I reapplied some thermal paste to the cpu and fan, but got no results (I did this because the old paste was very badly distributed along the surface of both cpu and fan). Now every time I turn it on, the computer acts randomly: sometimes at startup it plays a long continuous beep and then turns off. One time it played a long continuous beep and got to the Windows password input screen, but then it turned off. More frequently, I turn it on and after some seconds it turns off without any sound. You can check the video I recorded, which shows how the computer behaves most of the time, and another one. I tried troubleshooting it myself by disconnecting my graphics card, RAM, HDD and even the cpu and its fan, with no result. Computer specs: Intel Core 2 Quad Q6600 processor, Nvidia xfx gtx 260 black edition, 4 GB ram, 500 GB hard drive, Windows 7 64 bit

  • Problem installing SQLite3 RubyGem on Ubuntu

    - by misbehavens
    I am having a problem trying to install the SQLite3 RubyGem. Here's what I'm doing: $ sudo gem install --remote sqlite3-ruby Here's the output: Building native extensions. This could take a while... ERROR: Error installing sqlite3-ruby: ERROR: Failed to build gem native extension. /usr/bin/ruby1.8 extconf.rb checking for fdatasync() in -lrt... yes checking for sqlite3.h... no *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/usr/bin/ruby1.8 --with-sqlite3-dir --without-sqlite3-dir --with-sqlite3-include --without-sqlite3-include=${sqlite3-dir}/include --with-sqlite3-lib --without-sqlite3-lib=${sqlite3-dir}/lib --with-rtlib --without-rtlib Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5 for inspection. Results logged to /usr/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5/ext/sqlite3_api/gem_make.out
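
    The line "checking for sqlite3.h... no" is the telling part: the build can't find the SQLite development headers. A typical fix on Ubuntu (an assumption on my part; the post itself doesn't include a resolution) is to install them and retry the gem:

        $ sudo apt-get install libsqlite3-dev
        $ sudo gem install --remote sqlite3-ruby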

  • Performance: Nginx SSL slowness or just SSL slowness in general?

    - by Mauvis Ledford
    I have an Amazon Web Services setup with an Apache instance behind Nginx with Nginx handling SSL and serving everything but the .php pages. In my ApacheBench tests I'm seeing this for my most expensive API call (which caches via Memcached): 100 concurrent calls to API call (http): 115ms (median) 260ms (max) 100 concurrent calls to API call (https): 6.1s (median) 11.9s (max) I've done a bit of research, disabled the most expensive SSL ciphers and enabled SSL caching (I know it doesn't help in this particular test.) Can you tell me why my SSL is taking so long? I've set up a massive EC2 server with 8 CPUs and even applying consistent load to it only brings it up to 50% total CPU. I have 8 Nginx workers set and a bunch of Apache. Currently this whole setup is on one EC2 box but I plan to split it up and load balance it. There have been a few questions on this topic but none of those answers (disable expensive ciphers, cache SSL) seem to do anything. Sample results below: $ ab -k -n 100 -c 100 https://URL This is ApacheBench, Version 2.3 <$Revision: 655654 $> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking URL.com (be patient).....done Server Software: nginx/1.0.15 Server Hostname: URL.com Server Port: 443 SSL/TLS Protocol: TLSv1/SSLv3,AES256-SHA,2048,256 Document Path: /PATH Document Length: 73142 bytes Concurrency Level: 100 Time taken for tests: 12.204 seconds Complete requests: 100 Failed requests: 0 Write errors: 0 Keep-Alive requests: 0 Total transferred: 7351097 bytes HTML transferred: 7314200 bytes Requests per second: 8.19 [#/sec] (mean) Time per request: 12203.589 [ms] (mean) Time per request: 122.036 [ms] (mean, across all concurrent requests) Transfer rate: 588.25 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 65 168 64.1 162 268 Processing: 385 6096 3438.6 6199 11928 Waiting: 379 6091 3438.5 6194 11923 Total: 449 6264 3476.4 6323 12196 Percentage of the requests served within a certain time (ms) 50% 6323 66% 8244 75% 9321 80% 9919 90% 11119 95% 11720 98% 12076 99% 12196 100% 12196 (longest request)
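
    For reference, these are the Nginx directives usually involved in TLS handshake cost; the values here are assumptions for illustration, not taken from the poster's configuration:

        ssl_session_cache    shared:SSL:10m;
        ssl_session_timeout  10m;
        ssl_ciphers          HIGH:!aNULL:!MD5;
        keepalive_timeout    65;

    Also note that although ab was run with -k, the report shows "Keep-Alive requests: 0", which suggests each of the 100 requests also paid a full TCP connection and TLS handshake, amplifying any cipher or session-cache problem.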

  • Hotmail marking messages as junk

    - by Canadaka
    I was having problems with emails sent from my server being blocked completely by Hotmail, but I found out Hotmail had blocked my IP and by contacting Hotmail I had the block removed. See this question for more info: Email sent from server with rDNS & SPF being blocked by Hotmail. But now all emails from my server are going directly to recipients' "Junk" folder on Hotmail and I can't figure out why. Hotmail says "Microsoft SmartScreen marked this message as junk and we'll delete it after ten days." I tried contacting the same people at Hotmail who had my IP block removed, but I haven't received any reply and it's been almost a week. Here are some details: I have a valid SPF record for my domain "v=spf1 a include:_spf.google.com ~all" I have reverse DNS set up I have a Sender Score of 100 https://www.senderscore.org/lookup.php?lookup=66.199.162.177&ipLookup.x=55&ipLookup.y=14 I have signed up for Microsoft's SNDS and was approved. My IP says "All of the specified IPs have normal status." Microsoft added my IP to the JMRP Database My IP is not on any credible spam lists http://www.anti-abuse.org/multi-rbl-check-results/?host=66.199.162.177 My FROM header is being sent in proper format "From: CKA <[email protected]>" Here is a test email source:
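
    Two quick command-line checks of what receiving servers actually see (generic commands; "yourdomain.com" is a placeholder, not the poster's real domain):

        $ dig +short TXT yourdomain.com
        $ dig +short -x 66.199.162.177

    The first should return the published SPF record, the second the PTR record for the sending IP, and both should match what the message headers advertise.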

  • Overriding some DNS entries in BIND for internal networks

    - by Remy Blank
    I have an internal network with a DNS server running BIND, connected to the internet through a single gateway. My domain "example.com" is managed by an external DNS provider. Some of the entries in that domain, say "host1.example.com" and "host2.example.com", as well as the top-level entry "example.com", point to the public IP address of the gateway. I would like hosts located on the internal network to resolve "host1.example.com", "host2.example.com" and "example.com" to internal IP addresses instead of that of the gateway. Other hosts like "otherhost.example.com" should still be resolved by the external DNS provider. I have succeeded in doing that for the host1 and host2 entries, by defining two single-entry zones in BIND for "host1.example.com" and "host2.example.com". However, if I add a zone for "example.com", all queries for that domain are resolved by my local DNS server, and e.g. querying "otherhost.example.com" results in an error. Is it possible to configure BIND to override only some entries of a domain, and to resolve the rest recursively?
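
    One way this is often handled without capturing the whole domain (a sketch under assumed file names and internal addresses; BIND 9.8 or later is required) is a response policy zone, which overrides individual names while everything else is still resolved recursively through the external provider:

        // named.conf
        options {
            response-policy { zone "rpz"; };
        };
        zone "rpz" {
            type master;
            file "/etc/bind/db.rpz";
        };

        ; /etc/bind/db.rpz -- answers returned to internal clients (example addresses)
        $TTL 300
        @                  IN SOA localhost. root.localhost. (1 3600 900 604800 300)
                           IN NS  localhost.
        example.com        IN A   192.168.1.10
        host1.example.com  IN A   192.168.1.11
        host2.example.com  IN A   192.168.1.12

    The single-entry zones already in place achieve the same thing for host1 and host2; the policy zone just avoids the "whole domain answered locally" problem for the bare example.com entry.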

  • Process does ICMP port scan on my OSX box and I am afraid my Mac got a virus

    - by Jamgold
    I noticed that my 10.6.6 box has some process sending out ICMP messages to "random" hosts, which concerns me a lot. when doing a tcpdump icmp I see a lot of the following 15:41:14.738328 IP macpro > bzq-109-66-184-49.red.bezeqint.net: ICMP macpro udp port websm unreachable, length 36 15:41:15.110381 IP macpro > 99-110-211-191.lightspeed.sntcca.sbcglobal.net: ICMP macpro udp port 54045 unreachable, length 36 15:41:23.458831 IP macpro > 188.122.242.115: ICMP macpro udp port websm unreachable, length 36 15:41:23.638731 IP macpro > 61.85-200-21.bkkb.no: ICMP macpro udp port websm unreachable, length 36 15:41:27.329981 IP macpro > c-98-234-88-192.hsd1.ca.comcast.net: ICMP macpro udp port 54045 unreachable, length 36 15:41:29.349586 IP macpro > c-98-234-88-192.hsd1.ca.comcast.net: ICMP macpro udp port 54045 unreachable, length 36 I got suspicious when my router notified me about a lot of ICMP messages that don't get a response [INFO] Mon Jan 10 16:31:47 2011 Blocked outgoing ICMP packet (ICMP type 3) from 192.168.1.189 to 212.25.57.90 Does anyone know how to trace which process (or worse kernel module) might be responsible for this? I rebooted and logged in with a virgin user account and tcpdump showed the same results. Any dtrace magic welcome.
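
    Those "udp port ... unreachable" replies are what the kernel sends when an inbound UDP packet arrives on a port nothing is listening on, so this traffic is triggered from outside rather than generated by a local scanner. A reasonable first step (generic OS X commands, not from the thread) is to see which processes hold UDP sockets, and whether something recently stopped listening on the ports being probed:

        $ sudo lsof -nP -i UDP
        $ sudo lsof -nP -i :54045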

  • RHEL 5.2 installing on ProLiant BL460c - hangs at 'now booting the kernel'

    - by Dr Rocket Mr Socket
    As the title states, I have a problem. This server and the installation disc are also on the other side of the world to me so... So far, I have tried to start the install with the parameters: linux text noapic noacpi no=apic no=acpi which results in the same hang. I have also disabled a PCI ethernet adapter; I am uneasy about disabling the onboard ethernet adapter, as I do not know if iLO uses this. Anyone have any advice? Much appreciated. EDIT: full output after trying to begin the installation. boot: linux text Loading initrd.img.................. Loading vmlinuz....... Uncompressing Linux...done. Now booting the kernel (it stays on this for hours) EDIT2: adding the 'mem=40960M' (server has 40 gigs of ram) parameter allows it to proceed, but the following output appears directly after 'Now booting the kernel' Memory: sized by int13 0e801h initrd extends beyond end of memory (0x00ef2090 > 0x00000000) disabling initrd Console: 16 point font, 400 scans Console: colour VGA+ 80x25, 1 virtual console (max 63) pcibios_init : BIOS32 Service Directory Structure at 0x000ffee0 pcibios_init : BIOS32 Service Directory entry at 0xf0000
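
    For what it's worth, "no=apic" and "no=acpi" are not recognised kernel options; the usual spellings, combined on one boot line with the workaround from EDIT2, would be something like this (an assumption about the intended parameters, not a confirmed fix):

        boot: linux text noapic nolapic acpi=off mem=40960M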

  • What can cause Powershell execution policy not to be taken into account?

    - by Stephane
    We have in our infrastructure a number of powershell scripts used for various tasks ranging from user logon to support technicians simulating a user context. These scripts are centralized on our file server (through DFS) for easier management. Some of them are run at logon, some are run through published Citrix applications. We have applied a policy for the whole domain and all users that sets the Powershell execution policy to "unrestricted" so that the scripts can run from the file server. This works perfectly fine for logon scripts (at least, so far) but for scripts that are run later (usually through a published application but the same applies when using terminal services and a full desktop), the results are inconsistent: some users can run the script fine, some are always prompted in the powershell console to let the scripts run. I cannot find anything that could cause this behavior and it's really inconsistent: if I start powershell manually and run get-executionpolicy, I am told that the current policy is unrestricted. Yet, if from the same session I try to run a script through a program that calls powershell <script file name> <parameters> I get prompted before the script can run. What could cause such behavior?
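
    Two generic checks that help narrow this down (not taken from the original question): listing the policy per scope shows whether a more specific scope is overriding the domain GPO for the affected users, and the policy can also be forced for a single invocation by the launching program:

        PS> Get-ExecutionPolicy -List
        C:\> powershell.exe -ExecutionPolicy Bypass -File \\fileserver\scripts\somescript.ps1

    The UNC path above is a placeholder for whatever the published application actually calls.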

  • VPN - Accessing computer outside of network. Only works one way

    - by Dan
    I could use some help here. My ideal goal is to create a VPN for 2 macs that are in different locations so that they can share each others screens and share files. I basically want to do what Logmein's Hamachi does, but without the 5 user limitation. I have set up the VPN on my Synology NAS at my house using the PPTP protocol. I could also use OpenVPN. The good news is that I can use a laptop outside of my home network to access any computer on my network at my house. The bad news is that I can not do the reverse. I want to use a computer in my home network (same network as the VPN server) to access a computer outside of my network (which is connected via VPN successfully). My internal IP is 192.168.1.xxx PPTP VPN assigns my laptop that is outside of my network with 192.168.5.xxx, but when I try to access it remotely either with afp://192.168.5.xxx or vnc://192.168.5.xxx I can't connect using either. Is this something that I should be able to do or is VPN only one way? I've also tried openvpn with the same results. Thanks for any help! -Dan
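
    One generic thing to verify from the Mac inside the home network (not from the post): whether it has a route to the VPN client subnet at all, since a common cause of exactly this one-way behaviour is that only the VPN server knows about 192.168.5.0/24 and the other LAN hosts send that traffic to the default gateway instead:

        $ netstat -rn | grep 192.168.5

    If no route shows up, traffic addressed to 192.168.5.xxx never reaches the Synology that holds the PPTP tunnel.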

  • How can I mount dd image of a partition?

    - by Puneet Arora
    I created a dd image of a partition (containing an HFS+ FS) of one of my disks (and not the entire disk) a few days ago using the following command - dd conv=sync,noerror bs=8k if=/dev/sdc2 of=/path/to/img How can I mount it? I tried the following but it doesn't work - mount -o loop,ro -t hfsplus /path/to/img /path/to/mntDir It gives me mount: wrong fs type, bad option, bad superblock on /dev/loop1, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so and dmesg | tail gives me - [5248455.568479] hfs: invalid secondary volume header [5248455.568494] hfs: unable to find HFS+ superblock [5248462.674836] hfs: invalid secondary volume header [5248462.674843] hfs: unable to find HFS+ superblock [5248550.672105] hfs: invalid secondary volume header [5248550.672115] hfs: unable to find HFS+ superblock [5248993.612026] hfs: unable to find HFS+ superblock [5248998.103385] hfs: unable to find HFS+ superblock [5249031.441359] hfs: unable to find HFS+ superblock [5249036.274864] hfs: unable to find HFS+ superblock Is there something wrong that I am doing? I tried searching on how to do this but all the results I get only talk about mounting a partition from within a full disk image, using the offset option with mount - none talk about the case where the image itself is that of a partition. Thanks. PS: I'm running 64bit Arch Linux, and the partition from the original disk /dev/sdc2 mounts fine.
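
    One thing worth checking (an assumption based on the dd flags used, not a confirmed diagnosis): conv=sync with bs=8k pads the final partial block with zeros, and HFS+ locates its alternate volume header relative to the end of the volume, so an image that has grown past the original partition size can fail with exactly this "invalid secondary volume header" message. Attaching the image as a loop device truncated to the partition's real size tests the theory:

        # real size in bytes of the source partition, e.g. from: blockdev --getsize64 /dev/sdc2
        $ sudo losetup --find --show --read-only --sizelimit <bytes> /path/to/img
        $ sudo mount -t hfsplus -o ro /dev/loopN /path/to/mntDir   # use the device losetup printed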

  • Blue screen error code 1000008e

    - by Kas
    I'm getting blue screens, mostly when launching a program that requires a lot of memory (games, photo editing software). So far I've only managed to catch one set of error codes: BCCode: 1000008e BCP1: C0000005 BCP2: ADA393BA BCP3: E9BCEBC4 BCP4: 00000000 OS Version: 6_0_6002 Service Pack: 2_0 Product: 768_1 It's on a Sony VAIO Laptop VGN FW-41E, Vista OS service pack 2. Besides these codes it lists two 'temporary' files that were related to this crash: ...AppData\Local\Temp\WER-134925-0.sysdata.xml ...AppData\Local\Temp\WERDA66.tmp.version.txt When I googled these files some site said it was linked to a worm called 'yodo', but virus scans don't return any results (Hitman Pro, Malwarebytes and Avast antivirus all turn up empty). Upon further searching about this yodo worm, I came across security stronghold where someone posted they had acquired this worm when downloading Access and Excel templates. Now, I actually did download templates for the same programs, they might have been the same, they may be related or I might be grasping at straws here. I have not noticed any other issues in performance as of late, just BSODs when I start software that requires some memory, but I never had issues with these exact same programs before. Help and/or hints on how to actually figure out the root of this BSOD issue, and how to fix it, would be appreciated. Do you reckon it's actually a virus? What program should be able to remove YODO worm stuff?

  • Video desktop recording and multiple WM displays, capturing nonactive display

    - by okobaka
    Two WM displays running on one local machine. WM - Fluxbox. Using ffmpeg to record the desktop: ffmpeg -an -f x11grab -s 1920x1080 -r 25 -i :1.0 -sameq /tmp/video.mkv On one display everything works great, but not when I have another WM display started with startx -- :1. What I am doing right now is to switch with ctrl+alt+f8 to display :1.0 and start recording with ffmpeg. Everything is fine until I switch back with ctrl+alt+f7 to display :0.0; the WM and the captured video image freeze, but when I switch back with ctrl+alt+f8 to display :1.0, it unfreezes and continues recording. So, how do I make display :1.0 not freeze while I am on display :0.0? Tested some more: open [display 0.0], open [display 0.1] from [display 0.0] = open => [display 0.2], same problem. For different users and the same user the results are the same. ffmpeg keeps recording that paused image. It looks like the WM root window needs to be active to be recorded.
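
    One possible workaround (an assumption, not something confirmed in the thread): an X server that owns a hardware VT stops rendering while its VT is not in front, which matches the freeze-on-switch behaviour. Running the second session in a nested X server keeps it rendering no matter which VT is active, for example:

        $ Xephyr :1 -screen 1920x1080 &
        $ DISPLAY=:1 fluxbox &
        $ ffmpeg -an -f x11grab -s 1920x1080 -r 25 -i :1.0 -sameq /tmp/video.mkv

    Xvfb would work the same way if the second session never needs to be viewed directly.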

  • Visual Studio 2010 Beta 2, built-in font smoothing

    - by L. Shaydariv
    I've just installed Visual Studio 2010 Beta 2 onto my Windows XP machine to evaluate it and check whether it meets my preferences the way earlier versions did. Okay, I've temporarily worked around an urgent bug with a strange workaround (I could not open any file from the Solution Explorer), and it left me with bad memories, but that's okay. The first thing I saw on opening the code editor was ClearType font rendering. Wow, how unexpected. I must note that I do not use the standard Windows rendering techniques; I prefer GDI++, a font renderer developed by Japanese developers. (GDI++ renders fonts in Mac/Win-Safari style across all of Windows.) For me, GDI++ achieves great font-rendering results, allowing me to use the DejaVu Sans Mono font with really nice smoothing in Visual Studio 2008 (VS 2005 too, though VS 2005 crashes in this case). But GDI++ cannot affect the Visual Studio 2010 Beta 2 text editor - it uses ClearType (right?), and it does not care about the system font smoothing settings. It could be an editor based on WPF, right? So as far as I can see, I can't use GDI++ anymore because it hooks Windows GDI(+) but not WPF? So I've got several questions: Is it possible to disable VS 2010 b2's built-in ClearType or override it with another font smoother? Is it possible to install a Safari-like font renderer for Visual Studio 2010 [betas]? Thanks a lot.

  • Anonymous access to SMB share hosted on Server 2008 R2 Enterprise

    - by bwerks
    Hi all, First off, I have read through this post and a whole slew of non-SF posts which seem to address the same or a similar problem, however I was still unable to fix my problem. I've got three machines in this situation: a domain-joined server that runs Server 2008 R2 Enterprise ("share server") a domain-joined workstation running XP Pro SP3 ("workstation") a domain-unjoined test server running Server 2003 R2 SP2 ("test server") The share server is exposing a share on the network that the test server must access--it's a Source/Symbol Server share for our debugging purposes. I believe Visual Studio simply accesses the share with its own credentials in this case, meaning that the share must be accessible anonymously since the test server isn't joined to the domain and there's no opportunity to supply domain authentication. I've attempted a lot of things to avoid the authentication window when accessing the share: I've enabled the Guest account on the share server and given Guest full sharing/NTFS permissions for the share. I've given ANONYMOUS LOGON full sharing/NTFS permissions for the share. I've added my share to “Network Access: Shares that can be accessed anonymously” in LSP. I've disabled “Network access: Restrict anonymous access to Named Pipes and Shares” in LSP. I've enabled “Network access: Let Everyone permissions apply to anonymous users” in LSP. Added ANONYMOUS LOGON to “Access this computer from the network” in LSP. Added the Guest account to “Access this computer from the network” in LSP. Attempted to provision the share using the Share and Storage Management MMC snap-in. Unfortunately when I attempt to access the share from the test server, I still see the prompt and I'm forced to enter "Guest" manually. I also tried this workflow using the local administrator account on a workstation, and the same thing happens both with and without XP Simple File Sharing enabled. Any idea why I'm getting these results, or what I should have done differently?
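
    A quick way to see which identity is actually being refused (generic commands run from the non-domain machine; "shareserver" and "Symbols" are placeholder names): try an explicit null session and an explicit Guest connection and compare the errors:

        C:\> net use \\shareserver\IPC$ "" /user:""
        C:\> net use \\shareserver\Symbols "" /user:Guest

    If the null session fails but the Guest connection succeeds, the prompt is about anonymous access specifically rather than share or NTFS permissions.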

  • Bluescreen Stop 0x00000027 RDR_FILE_SYSTEM after cloning system on new HDD

    - by Daniel
    A couple of months ago I got a new 500GB HDD for my no-name-brand laptop PC and I cloned the complete Win 7 Pro 32bit system with Clonezilla from the old 70GB drive to the new one. At first everything was great, and the new driver was immediately updated. But since then I have been getting BSOD Stop errors more and more frequently (it used to be every 2-3 days, but now it's more like 2-3 times a day). From the event log in Windows I know that there are two different error codes showing up: 0x00000027 (0xbaad0073, 0x9954f80c, 0x9954f3f0, 0x8ecd7c82) RDR_FILE_SYSTEM 0x00000044 (0x85443230, 0x00000eae, 0x00000000, 0x00000000) MULTIPLE_IRP_COMPLETE_REQUESTS I checked for viruses and did a complete HDD check using the Windows tool and the Western Digital tool (the maker of the new HDD), without results. I also looked for driver updates but couldn't find any. The name of the HDD as shown in the device manager is: WDC WD5000BPVT-00HXZT1 ATA Device. I'm really a noob regarding this kind of problem, so if you have any idea what I can try without losing all my data, let me know, and also whether any additional information is required.
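
    A generic next step (not something from the thread): both stop codes usually leave minidumps under C:\Windows\Minidump, and loading the most recent one in WinDbg will typically name the driver involved, which narrows things down without touching any data on the cloned disk:

        kd> !analyze -v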

  • Win7 Ultimate 64bit not finding my SSD, any ideas?

    - by Jakub
    So I am trying to install Windows 7 Ultimate 64bit on my new SSD (Kingston SNVP325-S2/64GB) and I have unplugged my other drives (for the install only). The SSD is detected by the BIOS, and I am running the latest firmware for my motherboard P5N-E SLI (775 socket). Upon getting the Windows 7 install screen, it gives me a 'no drives detected' message. I have tried Load Driver with downloaded nForce drivers (latest for Win7 64bit, signed, WHQL), which results in a message of (something like) "To continue please click Load Driver and load 32-bit and signed 64-bit drivers... blah blah" ... basically it does NOT accept my nForce SATA drivers, nor my secondary SATA drivers (JMicron? can't recall the exact name now). I have tried F8 at loading to 'disable driver signing' in hopes that it was an unsigned driver, however nothing works. The SSD is not detected by the Windows 7 installer. I wasted 4 hours on this last night, and gave up, and got nowhere. Anyone heard of / run into this issue before? How can I get the drive detected? Some more details: - Kingston SNVP325-S2/64GB V+ SSD - ASUS P5N-E SLI MOBO - 8GB RAM A-DATA (memory is checked out and fine)

  • How to tell statd to use portmap on a non-localhost IP address?

    - by jneves
    How can I make statd connect to an IP address other than 127.0.0.1? I have a server that is connected to 2 different networks (one public, one private). I want it to provide an NFS share for only the private network. The host is an Ubuntu 8.04 machine. The private IP address is 192.168.1.202. I changed /etc/default/portmap to add: OPTIONS="-i 192.168.1.202" The command lsof -n | grep portmap returns: portmap 10252 daemon cwd DIR 202,0 4096 2 / portmap 10252 daemon rtd DIR 202,0 4096 2 / portmap 10252 daemon txt REG 202,0 15248 13461 /sbin/portmap portmap 10252 daemon mem REG 202,0 83708 32823 /lib/tls/i686/cmov/libnsl-2.7.so portmap 10252 daemon mem REG 202,0 1364388 32817 /lib/tls/i686/cmov/libc-2.7.so portmap 10252 daemon mem REG 202,0 31304 16588 /lib/libwrap.so.0.7.6 portmap 10252 daemon mem REG 202,0 109152 16955 /lib/ld-2.7.so portmap 10252 daemon 0u CHR 1,3 960 /dev/null portmap 10252 daemon 1u CHR 1,3 960 /dev/null portmap 10252 daemon 2u CHR 1,3 960 /dev/null portmap 10252 daemon 3u unix 0xecc8c3c0 4332992 socket portmap 10252 daemon 4u IPv4 4332993 UDP 192.168.1.202:sunrpc portmap 10252 daemon 5u IPv4 4332994 TCP 192.168.1.202:sunrpc (LISTEN) portmap 10252 daemon 6u REG 0,12 289 3821511 /var/run/portmap_mapping I defined the following in /etc/hosts: 192.168.1.202 server.local In /etc/default/nfs-common I changed STATDOPTS to: STATDOPTS="--name server.local" Yet when I run /etc/init.d/nfs-common start it fails to start. The log shows: Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: Version 1.1.2 Starting Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: Flags: Jun 8 06:37:44 cookwork-web1 rpc.statd[9723]: unable to register (statd, 1, udp). An strace -f rpc.statd -n server.local results in a lot of lines, including this one: sendto(9, "\200]3\362\0\0\0\0\0\0\0\2\0\1\206\240\0\0\0\2\0\0\0\1"..., 56, 0, {sa_family=AF_INET, sin_port=htons(111), sin_addr=inet_addr("127.0.0.1")}, 16) = 56
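
    The strace line shows where the registration dies: rpc.statd sends its registration to the portmapper at 127.0.0.1:111, but the lsof output shows portmap bound only to 192.168.1.202, so nothing is listening on loopback. A generic way to confirm this (commands not from the post) is to query the portmapper on each address:

        $ rpcinfo -p 127.0.0.1
        $ rpcinfo -p 192.168.1.202

    If the loopback query times out while the private address answers, statd and portmap are simply not meeting on the same interface.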

  • Strange activity in My Pictures folder: Thumbnail doesn't match actual picture.

    - by Sam152
    After finding an amusing picture on a popular imageboard, I decided to save it. A few days passed and I was browsing my images folder when I realised that the thumbnail generated by Windows XP in the thumbnails view did not match the actual image. Here is a comparison image: What's even stranger in this situation is that the parts of the photograph that are different have actually been replaced with what might be the correct background. Furthermore, it is a jpeg (no PNG transparency tricks) that is 343 kilobytes but only 847x847 pixels wide. What could be going on here? Could there be anything malicious in the works, or hidden data? Before anyone asks, I have checked and performed the following: Deleted Thumbs.db to reload thumbnails. Opened image in different editors. (they appear with the text) Moved image to a different directory. Changed the extension to .rar. All these steps produce the same results. Pre-posting update: It seems that opening the image in Paint and changing the image entirely (deleting its entire contents and making a red fill) still generates the original thumbnail, even after deleting Thumbs.db etc. I'm also hesitant to post the original data, in case there is something malicious or hidden that could be potentially illegal. (Although it would be very beneficial to see if it works on other computers and not just my own).
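
    One hypothesis worth testing (an assumption, not a confirmed explanation): JPEGs can carry an embedded EXIF thumbnail that differs from the full-size image, and some thumbnailers display that instead of rendering the picture, which would also explain why editing the pixels in Paint doesn't change the preview. Extracting the embedded thumbnail shows whether that is what Explorer is using; for example with exiftool (the filename is a placeholder):

        exiftool -b -ThumbnailImage picture.jpg > embedded_thumb.jpg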

  • How come my Intel 520 180GB SSD performs extremely poorly?

    - by Willem
    I recently installed a new Intel 520 series 180GB SSD in my brand new MacBook Pro. The system is as follows: Model: MacBook Pro 15-inch, Late 2011 (MacBookPro8,2) Processor: 2.4 GHz Intel Core i7 Memory: 16 GB 1333 MHz DDR3 Graphics: AMD Radeon HD 6770M 1024 MB Software: Mac OS X Lion 10.7.3 Main Drive Bay: Intel 520-series 180GB SATA-3 (6GB/s negotiated link) SSD (Firmware: 400i) [80GB free] Optical Bay: Toshiba 5400 RPM 750GB SATA-2 HDD Trim: Enabled (according to Trim Enabler App) And here are the speeds I'm getting: Read: 412 MB/s Write: 186 MB/s What have I done wrong? Results expected: Read/write both 500MB/s I have seen benchmarks with lesser SSDs (SATA-2 even) outperform my write speeds by far. Also, Intel 520 SSDs are supposed to be the top class of SSDs. Trim Enabler report: This looks a bit odd compared to screenshots from their site: These are the defined S.M.A.R.T. attributes (taken from Intel): And here are my S.M.A.R.T. attributes read using the smartctl tool from smartmontools: They don't seem very compatible. I'm going to try and look for a S.M.A.R.T. attributes reader tool for OS X which might support the Intel 520 series.

  • Hiera datatypes won't load in Puppet

    - by Cole Shores
    I have spent a couple of days on this, followed the instructions on http://downloads.puppetlabs.com/docs/puppetmanual.pdf and even the Puppet Training Advanced Puppet manual. When I run a test against it, the results always come back as 'nil' and Im not sure why. I am running Puppet 3.6.1 Community Edition, with Hiera 1.2.1 on SLES 11. My puppet.conf file at /etc/puppet/puppet.conf consists of: [main] # The Puppet log directory. # The default value is '$vardir/log'. logdir = /var/log/puppet # Where Puppet PID files are kept. # The default value is '$vardir/run'. rundir = /var/run/puppet # Where SSL certificates are kept. # The default value is '$confdir/ssl'. ssldir = $vardir/ssl certificate_revocation = false [master] hiera_config=/etc/puppet/hiera.yaml reporturl = http://puppet2.vvmedia.com/reports/upload ssl_client_header = SSL_CLIENT_S_DN ssl_client_verify_header = SSL_CLIENT_VERIFY # certname = dev-puppetmaster2.vvmedia.com # ca_name = 'dev-puppetmaster2.vvmedia.com' # facts_terminus = rest # inventory_server = localhost # ca = false [agent] # The file in which puppetd stores a list of the classes # associated with the retrieved configuratiion. Can be loaded in # the separate ``puppet`` executable using the ``--loadclasses`` # option. # The default value is '$confdir/classes.txt'. classfile = $vardir/classes.txt # Where puppetd caches the local configuration. An # extension indicating the cache format is added automatically. # The default value is '$confdir/localconfig'. localconfig = $vardir/localconfig my /etc/puppet/hiera.yaml consists of: :backends: yaml :yaml: :datadir: /etc/puppet/hieradata :hierarchy: - common - database I have a directory created in /etc/puppet/hieradata and within it contains: /etc/puppet/hieradata/common.yaml :nameserver: ["dnsserverfoo1", "dnsserverfoo2"] :smtp_server: relay.internalfoo.com :syslog_server: syslogfoo.com :logstash_shipper: logstashfoo.com :syslog_backup_nfs: nfsfoo:/vol/logs :auth_method: ldap :manage_root: true and /etc/puppet/hieradata/database.yaml :enable_graphital: true :mysql_server_package: MySQL-server :mysql_client_package: MySQL-client :allowed_groups_login: extranet_users does anyone have any idea what could be causing Hiera to not load the requested values? I have tried even restarting the Master. Thanks in advance, Cole
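
    One thing that stands out (an observation, not a confirmed fix): the keys in common.yaml and database.yaml are written with leading colons (e.g. :smtp_server:), which YAML loads as Ruby symbols, whereas a hiera('smtp_server') lookup in a manifest uses the plain string key. A command-line lookup against the same hiera.yaml makes it easy to test whether a key resolves before and after renaming it to the string form (smtp_server:):

        hiera -c /etc/puppet/hiera.yaml smtp_server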

  • Force Juniper-network client to use split routing

    - by craibuc
    I'm using the Juniper client for OSX ('Network Connect') to access a client's VPN. It appears that the client is configured to not use split-routing. The client's VPN host is not willing to enable split-routing. Is there a way for me to override this configuration or do something on my workstation to get the non-client network traffic to bypass the VPN? This wouldn't be a big deal, but none of my streaming radio stations (e.g. XM) work while connected to their VPN. Apologies for any inaccuracies in the terminology. ** edit ** The Juniper client changes my system's resolv.conf file from: nameserver 192.168.0.1 to: search XXX.com [redacted] nameserver 10.30.16.140 nameserver 10.30.8.140 I've attempted to restore my preferred DNS entry to the file $ sudo echo "nameserver 192.168.0.1" >> /etc/resolv.conf but this results in the following error: -bash: /etc/resolv.conf: Permission denied How does the super-user account not have access to this file? Is there a way to prevent the Juniper client from making changes to this file?
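
    On the resolv.conf error specifically: the "Permission denied" comes from the shell, not from sudo. The >> redirection is performed by the calling (non-root) shell before sudo ever runs, so only the echo was privileged. A form that keeps the write itself privileged (a generic shell idiom, not from the post):

        $ echo "nameserver 192.168.0.1" | sudo tee -a /etc/resolv.conf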

  • how to prevent other computers from seeing our network computers through vpn

    - by Disco
    We have a local office domain consisting of Windows 7 and XP machines that is running on Windows Server 2008 R2. We also have users that connect via VPN into our network. My concern is that when a remote user opens up a folder, the Network section on the left side of the folder shows the remote user all the computer names in our local network. I would like to go about renaming our computers in the local network with more descriptive computer names, but I do not want the users off-site to be able to see these computer names by simply opening up a folder. (Granted, they can already do this, but our current naming scheme does not link computer names to users.) I would like to change our computer names so we can determine which computer belongs to which user more easily IF it can be done securely. How can I ensure that our local computer names are not showing up in the Network folder for remote, VPN-connected users? My online searches have turned up results where people are advised to turn off Network Sharing and Discovery, but that seems to only ensure that the local machine doesn't see other computer names. I want to prevent OUR computer names from showing up on OTHER computers, and I can't go into the VPN-connected computers and turn off THEIR Network Discovery settings. I would think there is a group policy that would control this but I have not found one yet and I don't know how I would apply it to VPN-connected computers. Thanks! EDIT: That's true, a Group Policy wouldn't run on users only connecting via VPN, good point. What about a VPN/router policy, then?

  • IIS 7.0 - responses throttled to 500ms blocks?

    - by Julia Hayward
    Scenario: ASP.NET MVC web app sitting on my local machine (Vista Ultimate, IIS 7.0), nothing going on except one user (me) logged in and viewing an index page. The page includes 9 dynamic images drawn from the underlying DB and returned from a controller action. I have got the actual processing time for these images down to 15ms each. Turn on Firebug and watch the page load. What I see is 9 requests for images firing off together – no surprise – but four come back to me almost immediately; two more after 0.5s; another after 1s; then at 1.5s and 2s. Logging on the server side suggests the individual responses are still only taking 15ms. So it appears IIS is queueing things up into 500ms chunks. (Repeating the experiment produces different results, but each time the images return in similar blocks – you might get three in the first group, then three at 0.5s, two at 1s etc, for example – and it’s always at 500ms intervals, not anything else.) It’s also repeatable cross-browser, and it’s not repeatable with other forms of content. I haven't found any particular mention of this problem out there, so I'm sort of assuming it's not an IIS bug, so is it: i) IIS on desktop OSs deliberately does it, to make you use server OSs in production? ii) There is some magical setting that has eluded me for as long as I’ve known IIS? iii) Something peculiar to MVC or SQL Server 2008? or something else?

  • chkconfig creating service symlinks with the wrong order

    - by Robert
    On RHEL 6.3, I have a system service that should be starting after postgresql and httpd (order 64 and 85, respectively), but chkconfig always places it at order 50. I tried an experiment on a CentOS 6.0 virtual machine to make sure I understood the LSB stanza syntax. I created /etc/init.d/foo, owner root, permissions 755, with this text: ### BEGIN INIT INFO # Provides: foo # Required-Start: postgresql httpd # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Description: Foo init script ### END INIT INFO And then ran chkconfig --add foo. Result: /etc/rc5.d/S86foo is created, as expected. (The other runlevels are also as expected.) I repeated the exact same experiment on the RHEL machine, and it created /etc/rc5.d/S50foo instead. I can't see anything different between the two that would lead to different results. Both machines have postgresql and httpd starting at the same orders and runlevels. Any thoughts? I could just use # chkconfig: 2345 86 50, or manually rename the service symlinks to the correct order, but I'm trying to document an install process for later users, and I want to know how to do it right and understand why it's not working as expected.
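
    As a point of comparison, the traditional Red Hat header the poster mentions would look like the lines below (the priorities are simply the values from the question, not a recommendation), and chkconfig needs to re-read the script after any header change before the symlink order updates:

        # chkconfig: 2345 86 50
        # description: Foo init script

        $ chkconfig --del foo && chkconfig --add foo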
