Search Results

Search found 2848 results on 114 pages for 'nor'.


  • MS Security Essentials efficiency / usage, suspicious processes

    - by biggvsdiccvs
    I recently noticed that my (originally pretty fast) Windows 7 Pro laptop started getting slow and using a lot of CPU power for no apparent reason. A full scan by Microsoft Security Essentials revealed nothing. After some investigation, I found multiple instances of a strange process called urpev.exe and a couple of similar exe files sitting in subdirectories of Users//AppData/Roaming (this particular one was in a folder called Xyceowme). Description: "Mescrosift Visaal Studie 2010". Company name: "Mesrosift Corporatien". Is it a virus or something? :) Now, all of these exe files were scheduled to be started from the Task Scheduler by tasks with names like "Security Center Update - 1291373911" and similar. My user name was listed as the author of the tasks. I disabled the tasks, restarted the computer in safe mode and moved all of the exe files to quarantine for further investigation. All of this was done last night. I just scanned the files with Security Essentials again (not updated since yesterday) in the quarantine location and this time it found PWS:Win32/Zbot.gen!plock in urpev.exe (but not in the other exe files, which are most likely viruses, too). Category: Password Stealer Description: This program is dangerous and captures user passwords. Another strange process is browser.exe (not chrome.exe) by Google Inc., described as Google Chrome. I uninstalled Chrome but it's still there. It runs out of Users\\AppData\LocalLow\UIVoice\ToolMedium\browser.exe and if I move it in safe mode, it just reappears there, and multiple instances run. Needless to say, if I kill it, it just runs again. Couldn't see anything in Task Scheduler, but found a couple of references to it in the Registry Editor: HKEY_CURRENT_USER/Software/Microsoft/Internet Explorer/LowRegistry/Audio/PolicyConfig/PropertyStore/ HKEY_USERS/S-1-5-21-1685709306-872053864-2599010960-1002/Software/Microsoft/Internet Explorer/LowRegistry/Audio/PolicyConfig/PropertyStore/ Maybe it's a legit process, but it seems kind of strange. For the time being, I suspended the process and killed all of the child processes when I booted up the laptop. I used Security Essentials to scan the system periodically, but obviously it's not effective against at least one virus. I had the "real-time protection" turned off. Would it help if it were turned on, and how much of a nuisance would it be? I wonder if there is a better alternative to Security Essentials. Over the years I've used multiple antivirus products at home and especially at work and was not very happy with any of them. Apparently, asking for software recommendations or comparisons is taboo here, but I will mention that I installed Malwarebytes and it was able to find and quarantine a bunch of suspicious files, at least some of which were truly infected, but when it scans the bogus security center update executables from Mesrosift Corporatien, it finds nothing wrong. Also, any thoughts on the browser.exe mystery? Neither MS Security Essentials nor Malwarebytes found anything wrong with that file. However, after I ran a Malwarebytes scan and quarantined everything it found suspicious and rebooted the laptop, the process did not run.

    Read the article

  • Blocking 'good' bots in nginx with multiple conditions for certain off-limits URLs where humans can go

    - by Glenn Plas
    After 2 days of searching/trying/failing I decided to post this here, I haven't found any example of someone doing the same nor what I tried seems to be working OK. I'm trying to send a 403 to bots not respecting the robots.txt file (even after downloading it several times). Specifically Googlebot. It will support the following robots.txt definition. User-agent: * Disallow: /*/*/page/ The intent is to allow Google to browse whatever they can find on the site but return a 403 for the following type of request. Googlebot seems to keep on nesting these links eternally adding paging block after block: my_domain.com:80 - 66.x.67.x - - [25/Apr/2012:11:13:54 +0200] "GET /2011/06/ page/3/?/page/2//page/3//page/2//page/3//page/2//page/2//page/4//page/4//pag e/1/&wpmp_switcher=desktop HTTP/1.1" 403 135 "-" "Mozilla/5.0 (compatible; G ooglebot/2.1; +http://www.google.com/bot.html)" It's a wordpress site btw. I don't want those pages to show up, even though after the robots.txt info got through, they stopped for a while only to begin crawling again later. It just never stops .... I do want real people to see this. As you can see, google get a 403 but when I try this myself in a browser I get a 404 back. I want browsers to pass. root@my_domain:# nginx -V nginx version: nginx/1.2.0 I tried different approaches, using a map and plain old nono if's and they both act the same: (under http section) map $http_user_agent $is_bot { default 0; ~crawl|Googlebot|Slurp|spider|bingbot|tracker|click|parser|spider 1; } (under the server section) location ~ /(\d+)/(\d+)/page/ { if ($is_bot) { return 403; # Please respect the robots.txt file ! } } I recently had to polish up my Apache skills for a client where I did about the same thing like this : # Block real Engines , not respecting robots.txt but allowing correct calls to pass # Google RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Googlebot/2\.[01];\ \+http://www\.google\.com/bot\.html\)$ [NC,OR] # Bing RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ bingbot/2\.[01];\ \+http://www\.bing\.com/bingbot\.htm\)$ [NC,OR] # msnbot RewriteCond %{HTTP_USER_AGENT} ^msnbot-media/1\.[01]\ \(\+http://search\.msn\.com/msnbot\.htm\)$ [NC,OR] # Slurp RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Yahoo!\ Slurp;\ http://help\.yahoo\.com/help/us/ysearch/slurp\)$ [NC] # block all page searches, the rest may pass RewriteCond %{REQUEST_URI} ^(/[0-9]{4}/[0-9]{2}/page/) [OR] # or with the wpmp_switcher=mobile parameter set RewriteCond %{QUERY_STRING} wpmp_switcher=mobile # ISSUE 403 / SERVE ERRORDOCUMENT RewriteRule .* - [F,L] # End if match This does a bit more than I asked nginx to do but it's about the same principle, I'm having a hard time figuring this out for nginx. So my question would be, why would nginx serve my browser a 404 ? Why isn't it passing, The regex isn't matching for my UA: "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.30 Safari/536.5" There are tons of example to block based on UA alone, and that's easy. It also looks like the matchin location is final, e.g. it's not 'falling' through for regular user, I'm pretty certain that this has some correlation with the 404 I get in the browser. As a cherry on top of things, I also want google to disregard the parameter wpmp_switcher=mobile , wpmp_switcher=desktop is fine but I just don't want the same content being crawled multiple times. 
Even though I ended up adding wpmp_switcher=mobile via the google webmaster tools pages (requiring me to sign up ....). that also stopped for a while but today they are back spidering the mobile sections. So in short, I need to find a way for nginx to enforce the robots.txt definitions. Can someone shell out a few minutes of their lives and push me in the right direction please ? I really appreciate ANY response that makes me think harder ;-)
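    A minimal sketch of the kind of rule being attempted, assuming a standard WordPress front controller (the try_files fallback is an assumption, not part of the original config); with it, the location matches for everyone, bots get the 403, and humans fall through to WordPress instead of a 404:

        # in the http block
        map $http_user_agent $is_bot {
            default 0;
            ~*(crawl|googlebot|slurp|spider|bingbot) 1;
        }
        # in the server block
        location ~ ^/\d{4}/\d{2}/page/ {
            if ($is_bot) {
                return 403;   # bots that ignore robots.txt
            }
            try_files $uri $uri/ /index.php?$args;   # assumed WordPress handler; lets real visitors through
        }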

    Read the article

  • I cut-to-move my DCIM folder to external SD when an automatic Android OS update popped up before I could choose the target - Cannot recover 200+ photos

    - by ZeroG
    I was downloading my Exhibit II's DCIM camera folder (with month's of photos inside) to its external SD card, in order to transfer them into my laptop. In my overconfidence, I hurriedly chose cut-to-move (rather than copy-to-move) when KABOOM! —an automatic Android OS update popped up before I could choose the target!!! I figured everything was in cache & calmly tried to go through with the update. But that was not a typically seamless event. It showed downloading icon but hmm… since I rooted the phone it brought the command line up & recovery sequence. But neither Android nor I had yet downloaded any alternate custom ROM Files to internal SD to update from! So were they trying to make me unroot my phone by giving me some bogus update on the fly or just give me a hard time in trying to hand me down an unrooted ROM that I'd have to figure out how to root again? Yes, I know there was that blurb about overwriting a file of the same name but I was trying to shake the darn stubborn update being forced on my phone during this precarious moment. I thought I had frozen or turned off all those auto-updates previously. Anyway, phones are small & fingers are big (sigh)... I tried to reboot into safe mode but the resultant photo file was partially overwritten (200 files had names but Zero bytes in them). I thought maybe it was still hung in cache or deposited somewhere else but I have searched everywhere with file managers. Since I did not have Titanium backing up camera, photo folder or gallery, I cannot recover 200+ photos. Dumb. You can understand my dilemma as I am involved in the arts & although just a camera phone, most of these photos were historic & aesthetic or at least as to subject matter. Photo-ops don't reoccur. I have tried a couple of recovery apps from the market like Search Duplicates & Recover to no avail. I was only able to salvage stuff I'd sent out in messages. I've got several decades in computers & this is such a miserable beginner's piece of bad luck I can't believe it happened to me. They were precious photos! Yes, I turned on Titanium since & yes I even tried USB to laptop recoveries. Being on a MacBookPro I'm trying androidfiletransfer.dmg, but I'd have to upgrade to Peach Sunrise to get above Android 3.0 for that App to recognize the phone via USB & the programmer says installation zeros your data, so that pretty much toasts any secret hidden places where these photos may have been deposited. Don't want to do that & am still trying to find them. They certainly didn't make it to my external SD Card. If any of you techies out there know anything, please help & thanks. Despite decades of being in computing, unfamiliar & ever-changing hard or software can humble even the most seasoned veterans.

    Read the article

  • VirtualBox - Public Static IP for a Debian Guest on a Dedicated Server

    - by user86296
    Goal: I want to run a Debian-squeeze-Guest in VirtualBox and it's own public static ip. I found tons of threads about this topic, but all in all I'm now trying for 10 hours (reading the manual, the forums, trying to learn about networking concepts & commands) to give a Guest his own public static ip (so that the Guest is similar to a vServer you can order from a hosting company), but wasn't able to. Since I'm a big noob as far as networking stuff is concerned, I'm probably doing something wrong.(please bear with me :-) ) Situation: VirtualBox 4.0.10 (headless no gui) is running on a dedicated Debian-Server, the Guest OS is Debian as well. The server has a static ip and I ordered an additional ip for a VM. Problem description: Upto now I was able to use NAT to access the VM from the outside and to setup an internal network between several Guests and all of this worked very well. When setting NIC 1 to bridged and configuring a public static ip on the guest, the guest was unpingable. (neither from outside, nor from the host) I could connect to the guest via the internal network, from another vm, though. ( VBoxManage controlvm VMGuest nic1 bridged eth0 ) ( configuration attempt of static-ip on the guest '/etc/network/interfaces' is below) Please let me know what I'm doing wrong, or what I can try to get it to work, or if you need more info. I think I've read that with a current VirtualBox-version for bridged networking no special host-configuration is necessary, is that accurate, or might that be the problem? Additional Info Info I got from the hosting company about the additional IP Please note that you can use the IP address only for this server. IP: 46.4.xx.xx Gateway: 46.4.xx.xx Mask: 255.255.255.248 VBoxManage showvminfo VMGuest |less ... NIC 1: MAC: 080027D72F7B, Attachment: Bridged Interface 'eth0', Cable connected: on, Trace: off (file: none), Type: 82540EM, Reported speed: 0 Mbps, Boot priority: 0 NIC 2: MAC: 080027B03B75, Attachment: Internal Network 'InternalNet1', Cable connected: on, Trace: off (file: none), Type: Am79C973, Reported speed: 0 Mbps, Boot priority: 0 NIC 3: disabled (...rest is disabled) cat /etc/network/interfaces on the Host-machine # Loopback device: auto lo iface lo inet loopback # device: eth0 auto eth0 iface eth0 inet static address 46.4.xx.xx broadcast 46.4.xx.xx netmask 255.255.255.224 gateway 46.4.xx.xx post-up mii-tool -F 100baseTx-FD eth0 # default route to access subnet up route add -net 46.4.xx.xx netmask 255.255.255.224 gw 46.4.xx.xx eth0 cat /etc/network/interfaces on the Guest-VM # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). 
# The loopback network interface auto lo iface lo inet loopback # The primary network interface allow-hotplug eth0 auto eth0 iface eth0 inet static address 46.4.xx.xx netmask 255.255.255.248 gateway 46.4.xx.xx auto eth1 iface eth1 inet dhcp ifconfig -a on the Guest shows the correct static ip for eth0 but the Guest is unreachable "over eth0" eth0 Link encap:Ethernet HWaddr 08:00:27:d7:2f:7b inet addr:46.4.xx.xx Bcast:46.4.xx.xx Mask:255.255.255.248 inet6 addr: fe80::a00:27ff:fed7:2f7b/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:21 errors:0 dropped:0 overruns:0 frame:0 TX packets:69 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1260 (1.2 KiB) TX bytes:3114 (3.0 KiB) eth1 Link encap:Ethernet HWaddr 08:00:27:b0:3b:75 inet addr:192.168.10.3 Bcast:192.168.10.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:feb0:3b75/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:142 errors:0 dropped:0 overruns:0 frame:0 TX packets:92 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:15962 (15.5 KiB) TX bytes:14540 (14.1 KiB) Interrupt:16 Base address:0xd240 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:123 errors:0 dropped:0 overruns:0 frame:0 TX packets:123 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:25156 (24.5 KiB) TX bytes:25156 (24.5 KiB)
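    One thing worth checking, offered as an assumption rather than a diagnosis: several dedicated-server providers only route an additional IP to the MAC address of the physical server unless a separate virtual MAC is requested for that IP, in which case a bridged guest stays unreachable no matter how correct its interfaces file is. A sketch of attaching such a provider-issued MAC (placeholder value) to the bridged NIC:

        VBoxManage controlvm VMGuest poweroff
        VBoxManage modifyvm VMGuest --nic1 bridged --bridgeadapter1 eth0 --macaddress1 00AABBCCDDEE
        VBoxManage startvm VMGuest --type headless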

    Read the article

  • Trouble with site-to-site OpenVPN & pfSense not passing traffic

    - by JohnCC
    I'm trying to get an OpenVPN tunnel going on pfSense 1.2.3-RELEASE running on embedded routers. I have a local LAN 10.34.43.0/254. The remote LAN is 10.200.1.0/24. The local pfSense is configured as the client, and the remote is configured as the server. My OpenVPN tunnel is using the IP range 10.99.89.0/24 internally. There are also some additional LANs on the remote side routed through the tunnel, but the issue is not with those since my connectivity fails before that point in the chain. The tunnel comes up fine and the logs look healthy. What I find is this:- I can ping and telnet to the remote LAN and the additional remote LANs from the local pfSense box's shell. I cannot ping or telnet to any remote LANs from the local network. I cannot ping or telnet to the local network from the remote LAN or the remote pfSense box's shell. If I tcpdump the tun interfaces on both sides and ping from the local LAN, I see the packets hit the tunnel locally, but they do not appear on the remote side (nor do they appear on the remote LAN interface if I tcpdump that). If I tcpdump the tun interfaces on both sides and ping from the local pfSense shell, I see the packets hit the tunnel locally, and exit the remote side. I can also tcpdump the remote LAN interface and see them pass there too. If I tcpdump the tun interfaces on both sides and ping from the remote pfSense shell, I see the packets hit the remote tun but they do not emerge from the local one. Here is the config file the remote side is using:- #user nobody #group nobody daemon keepalive 10 60 ping-timer-rem persist-tun persist-key dev tun proto udp cipher BF-CBC up /etc/rc.filter_configure down /etc/rc.filter_configure server 10.99.89.0 255.255.255.0 client-config-dir /var/etc/openvpn_csc push "route 10.200.1.0 255.255.255.0" lport <port> route 10.34.43.0 255.255.255.0 ca /var/etc/openvpn_server0.ca cert /var/etc/openvpn_server0.cert key /var/etc/openvpn_server0.key dh /var/etc/openvpn_server0.dh comp-lzo push "route 205.217.5.128 255.255.255.224" push "route 205.217.5.64 255.255.255.224" push "route 165.193.147.128 255.255.255.224" push "route 165.193.147.32 255.255.255.240" push "route 192.168.1.16 255.255.255.240" push "route 192.168.2.16 255.255.255.240" Here is the local config:- writepid /var/run/openvpn_client0.pid #user nobody #group nobody daemon keepalive 10 60 ping-timer-rem persist-tun persist-key dev tun proto udp cipher BF-CBC up /etc/rc.filter_configure down /etc/rc.filter_configure remote <host> <port> client lport 1194 ifconfig 10.99.89.2 10.99.89.1 ca /var/etc/openvpn_client0.ca cert /var/etc/openvpn_client0.cert key /var/etc/openvpn_client0.key comp-lzo You can see the relevant parts of the routing tables extracted from pfSense here http://pastie.org/5365800 The local firewall permits all ICMP from the LAN, and my PC is allowed everything to anywhere. The remote firewall treats its LAN as trusted and permits all traffic on that interface. Can anyone suggest why this is not working, and what I could try next?
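    One detail the question does not show is the per-client file in client-config-dir. In a routed site-to-site setup the server usually needs an iroute for the client's LAN in addition to the route statement, otherwise packets sourced from that LAN are dropped (typically logged as "MULTI: bad source address from client") while pings from the pfSense shells themselves still work. A minimal sketch, assuming the client certificate's common name is "clientCN" (a placeholder):

        # /var/etc/openvpn_csc/clientCN  -- file name must match the client cert CN
        iroute 10.34.43.0 255.255.255.0
        # the server config keeps its existing kernel route:
        # route 10.34.43.0 255.255.255.0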

    Read the article

  • How to recover data files from xampp-windows to xampp-linux after crash?

    - by David Buehler
    My Windows box died after I developed a database in xampp on it; fortunately I have a backup of the entire F:/TestWeb/Xampp partition. Unfortunately, I did not do an Export (nor dump) of the "Lws2" database before the crash. I have replaced the defunct machine with one running Mint7 (based on Ubuntu 9.04 "Jaunty Jackalope") and installed xampp-linux into the /opt partition, so the new xampp now runs fine in /opt/lampp, and says all the elements are secured by passwords (which I just assigned during this installation.) I assumed that Xamp-Windows installed in November would migrate easily to xampp-linux installed iin February -- a bad assumption. It apparently would have been simple if I had known enough to do an Export or a Dump before the crash, but.... The backup was done to a Network Attached Storage drive, which is formatted as "vfat" so the backup does not carry with it any valid ownership permissions from MySql on NTFS. I now see from my backup that the old data resided in \TestWeb\Xampp\Mysql\Data\Lws2\ and consists of 7 ".frm" files which define my tables. The actual data -- I suppose a ".sql" file or files -- has disappeared, and I am resigning myself to two days of retyping it. But I do not wish to do the table layouts all over again. So I copied Data tree to /opt/lampp/Data -- PhpMyAdmin does not see it. So I copied Lws2 tree to /opt/lampp/Lws2 -- PhpMyAdmin does not see it. So I copied Data tree to /opt/lampp/var/mysql/Data -- PhpMyAdmin does not see it. So I copied Lws2 tree to /opt/lampp/var/mysql/Lws2 -- PhpMyAdmin does not see it. So I adjusted all the permissions to stop saying owner "nobody" to owner "root" and gave full permissions to all groups and to all others, with permissions percolating down, in all 4 trees. You guessed it -- PhpMyAdmin does not see any database named Lws2, only its 4 default ones. I double-checked the permissions and rebooted Linux and repeated the tests. At some point in that process I did see PhpMyAdmin showing "lws2(7)" but when I clicked on it I saw a "no table found" message. I have not been able to recreate that experience. Apparently there are some setup files for MySql and for PhpMyAdmin which need to be set up by running a wizard or two or by editing the files directly. I grepped the TestWeb tree and found an old "ldir = "C:TestWeb\Xampp\MySql\" and a "DataDir = C:TestWeb\Xampp\MySql\" in a .php file and in a .bat file, but I cannot find the corresponding config file names on the /opt partition/ -- so it looks as if these wizards have not been run to create them. What config files files does Linux use to setup MySql config files for PhpMyAdmin? What wizards do I need to run to point the MySql engine and the PhpMyAdmin at the folder /opt/lampp/data/ with its lws2 folder inside it? Or which files do I need to edit, with a sample of what it normally says under Linux? Incidentally, I remember I converted from MyISAM with its .MYD and .MYI files to InnoDB after entering only a small amount of the data -- and I do not know what file types to look for -- perhaps my data is still there but under another guise or in another place? Is it something as simple as linux needing to see "/data/" instead of /Data? I will check that out while waiting for a response. If anyone can point me to documentation that discusses this level of detail -- I will read it avidly! In any case, thanks for any clarification you can give on this thorny problem. wizdum
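    Since the tables were converted to InnoDB, the row data most likely lives in the shared ibdata1 file (plus ib_logfile0/1) in the backed-up Data directory rather than in per-table files, so copying only the Lws2 folder with its .frm files cannot bring the data back. A rough sketch of one attempt, assuming XAMPP for Linux keeps its data under /opt/lampp/var/mysql (the backup path is illustrative; stop MySQL first and keep a copy of the current data directory):

        /opt/lampp/lampp stopmysql
        cp -a /backup/TestWeb/Xampp/mysql/data/ibdata1 /opt/lampp/var/mysql/
        cp -a /backup/TestWeb/Xampp/mysql/data/ib_logfile* /opt/lampp/var/mysql/
        cp -a /backup/TestWeb/Xampp/mysql/data/lws2 /opt/lampp/var/mysql/   # directory name = database name; case matters on Linux
        chown -R mysql:mysql /opt/lampp/var/mysql
        /opt/lampp/lampp startmysql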

    Read the article

  • Overwriting the current system with a new Ubuntu installation

    - by tkoomzaaskz
    I've got an Ubuntu 11.10 install which lost its support in May 2013; now I'd like to reinstall, moving up to the most up-to-date LTS, which is 12.04. My question is regarding my current partitions and doing backups. Is there a safe way to back up my data on some local partitions instead of copying files onto DVDs/external drives (which is very inconvenient in my situation)? Following are system commands showing my disk: $ lsblk NAME MAJ:MIN RM SIZE RO MOUNTPOINT sda 8:0 0 232,9G 0 +-sda1 8:1 0 48,8G 0 +-sda2 8:2 0 63G 0 +-sda3 8:3 0 1K 0 +-sda4 8:4 0 53,7G 0 / +-sda5 8:5 0 18,6G 0 +-sda6 8:6 0 25,5G 0 +-sda7 8:7 0 23,3G 0 [SWAP] sr0 11:0 1 1024M 0 and $ sudo fdisk -l [sudo] password for xyz: Disk /dev/sda: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xc3ffc3ff Device Boot Start End Blocks Id System /dev/sda1 * 2048 102402047 51200000 7 HPFS/NTFS/exFAT /dev/sda2 215044096 347080703 66018304 7 HPFS/NTFS/exFAT /dev/sda3 347082750 488392064 70654657+ 5 Extended /dev/sda4 102402048 215042047 56320000 83 Linux /dev/sda5 395905923 434975939 19535008+ 83 Linux /dev/sda6 434976003 488392064 26708031 83 Linux /dev/sda7 347082752 395905023 24411136 82 Linux swap / Solaris In the beginning I had Windows Vista pre-installed on the machine when it was bought (damn!) and I installed Linux (the one I have now). The Windows loader in the master boot record has been overridden by GRUB and now I can boot both Windows and Linux. This is the list of mounted devices: $ mount /dev/sda4 on / type ext4 (rw,errors=remount-ro,commit=0) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) fusectl on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev) gvfs-fuse-daemon on /home/tomasz/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=tomasz) It's strange (I don't remember it being set up that way) that my current Linux uses only one partition (/dev/sda4), but anyway, that is how it seems to be. My final question is: am I able to use one of the existing Linux partitions for a backup and install Ubuntu 12.04 without removing either Windows or Ubuntu 11.10? I mean - will GRUB automatically accept both the old Windows Vista and 2 Linuxes (the old 11.10 and the "new" 12.04)? Is there any hidden operation done during installation that could harm my custom backup partition? My fstab file: proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda4 during installation UUID=d44e89f5-9da2-48eb-83b3-887652ec95d2 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda7 during installation UUID=bbe50535-ba57-434a-9272-211d859f0e00 none swap sw 0 0 sda5 and sda6 are leftover partitions created during an unsuccessful Linux installation (the one before my current installation); I didn't delete them, but I have access to them (and I can use them as backup partitions).
edit: second question is: why does lsblk show /dev/sda having 232,9G while fdisk shows that it has 250.1GB? Where does the difference come from?
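    On the backup question: a local partition can be used, as long as the 12.04 installer is later run with manual ("Something else") partitioning and told not to format that partition. A rough sketch, assuming the unused /dev/sda5 is sacrificed as the backup target (everything currently on it is lost):

        sudo mkfs.ext4 /dev/sda5
        sudo mkdir -p /mnt/backup && sudo mount /dev/sda5 /mnt/backup
        sudo rsync -aAXv --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*","/mnt/*","/media/*"} / /mnt/backup/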

    Read the article

  • Command-line video editing in Linux (cut, join and preview)

    - by sdaau
    I have rather simple editing needs - I need to cut up some videos, maybe insert some PNGs in between them, and join these videos (don't need transitions, effects, etc.). Basically, pitivi would do what I want - except, I use 640x480 30 fps AVI's from a camera, and as soon as I put in over a couple of minutes of that kind of material, pitivi starts freezing on preview, and thus becomes unusable. So, I started looking for a command line tool for Linux; I guess only ffmpeg (command line - Using ffmpeg to cut up video - Super User) and mplayer (Sam - Edit video file with mencoder under linux) are so far candidates, but I cannot find examples of the use I have in mind.   Basically, I'd imagine there's an encoder and player tools (like ffmpeg vs ffplay; or mencoder vs mplayer) - such that, to begin with, the edit sequence could be specified directly on the command line, preferably with frame resolution - a pseudocode would look like: videnctool -compose --file=vid1.avi --start=00:00:30:12 --end=00:01:45:00 --file=vid2.avi --start=00:05:00:00 --end=00:07:12:25 --file=mypicture.png --duration=00:00:02:00 --file=vid3.avi --start=00:02:00:00 --end=00:02:45:10 --output=editedvid.avi ... or, it could have a "playlist" text file, like: vid1.avi 00:00:30:12 00:01:45:00 vid2.avi 00:05:00:00 00:07:12:25 mypicture.png - 00:00:02:00 vid3.avi 00:02:00:00 00:02:45:10 ... so it could be called with videnctool -compose --playlist=playlist.txt --output=editedvid.avi The idea here would be that all of the videos are in the same format - allowing the tool to avoid transcoding, and just do a "raw copy" instead (as in mencoder's copy codec: "-oac copy -ovc copy") - or in lack of that, uncompressed audio/video would be OK (although it would eat a bit of space). In the case of the still image, the tool would use the encoding set by the video files.   The thing is, I can so far see that mencoder and ffmpeg can operate on individual files; e.g. cut a single section from a single file, or join files (mencoder also has Edit Decision Lists (EDL), which can be used to do frame-exact cutting - so you can define multiple cut regions, but it's again attributed to a single file). Which implies I have to work on cutting pieces first from individual files first (each of which would demand own temporary file on disk), and then joining them in a final video file. I would then imagine, that there is a corresponding player tool, which can read the same command line option format / playlist file as the encoding tool - except it will not generate an output file, but instead play the video; e.g. in pseudocode: vidplaytool --playlist=playlist.txt --start=00:01:14 --end=00:03:13 ... and, given there's enough memory, it would generate a low-res video preview in RAM, and play it back in a window, while offering some limited interaction ( like mplayer's keyboard shortcuts for play, pause, rewind, step frame). Of course, I'd imagine the start and end times to refer to the entire playlist, and include any file that may end up in that region in the playlist. Thus, the end result of all this would be: command line operation; no temporary files while doing the editing - and also no temporary files (nor transcoding) when rendering final output... which I myself think would be nice. So, while I think that all of the above may be a bit of a stretch - does there exist anything that would approximate the workflow described above?
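    Lacking a single tool that consumes such a playlist, roughly the same workflow can be scripted with ffmpeg alone: cut each piece with stream copy (frame-accurate only at keyframes) and join them with the concat demuxer, which avoids transcoding as long as all inputs share the same codecs and parameters; a still PNG would first have to be encoded into a matching clip, so it is left out of this sketch. Times and file names are illustrative:

        ffmpeg -i vid1.avi -ss 00:00:30 -to 00:01:45 -c copy part1.avi
        ffmpeg -i vid2.avi -ss 00:05:00 -to 00:07:12 -c copy part2.avi
        printf "file '%s'\n" part1.avi part2.avi > playlist.txt
        ffmpeg -f concat -i playlist.txt -c copy editedvid.avi
        ffplay editedvid.avi    # quick preview of the result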

    Read the article

  • Unable to remove limit on memory usage for PHP script

    - by Jess Telford
    The Situation I am having an issue with a PHP script getting the following error message: Fatal error: Out of memory (allocated 359923712) (tried to allocate 72 bytes) in /path/to/piwik/core/DataTable.php on line 969 The script I'm running is: /path/to/piwik/misc/cron/archive.sh I am assuming the numbers are Bytes, which means that total is approximately 360MB. For all intents and purposes, I have increased the memory limits on the server well above 360MB, yet this is the number (give or take a byte) it consistently errors out at. Please note: This question is not about fixing a memory leak in the script, nor about why the script itself is using so much memory. The script is part of the Piwik archiving process, so I cannot just fix any memory leaks, etc. For more info on this script and why I am increasing the memory limit, see "How to setup auto archiving" The question Given that the script is attempting to use over 360MB of memory, which I cannot change, why does it not seem possible for me to increase the amount of memory available to php on my server? What I've tried Increasing PHP's memory_limit Given the php.ini file: php -i | grep php.ini Configuration File (php.ini) Path => /usr/local/lib Loaded Configuration File => /usr/local/lib/php.ini I have edited that file, so the memory_limit directive reads; memory_limit = -1 Restart Apache, and check the new value has stuck; $ php -i | grep memory_limit memory_limit => -1 => -1 Run the script, and get the same error. I've also tried 1G, 768M, etc, all to the same result (ie; no change). Update 22nd June: Based on Vangel's help, I have attempted to set post_max_size to 20M in combination with setting memory_limit. Again, this has no effect. Removing the memory limit on child processes of Apache I have found and edited the httpd.conf file to make sure there is no RLimitMEM directive. I then used WHM's Apache Configuration Memory Usage Restrictions to generate a restriction, which it claimed was at 1000M (and confirmed by checking httpd.conf). Both of these resulted in no change to the script erroring at 360MB. Increasing the per process memory limits of Linux The current limits set on the system: $ ulimit -m 524288 $ ulimit -v 524288 I have attempted to set both of these to unlimited: $ ulimit -m unlimited $ ulimit -v unlimited $ ulimit -m unlimited $ ulimit -v unlimited Once again, this has resulted in absolutely no improvement in my problem. My setup $ cat /etc/redhat-release CentOS release 5.5 (Final) $ uname -a Linux example.com 2.6.18-164.15.1.el5 #1 SMP Wed Mar 17 11:30:06 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux $ php -i | grep "PHP Version" PHP Version => 5.2.9 $ httpd -V Server version: Apache/2.0.63 Server built: Feb 2 2011 01:25:12 Cpanel::Easy::Apache v3.2.0 rev5291 Server's Module Magic Number: 20020903:13 Server loaded: APR 0.9.17, APR-UTIL 0.9.15 Compiled using: APR 0.9.17, APR-UTIL 0.9.15 Architecture: 64-bit Server compiled with.... 
-D APACHE_MPM_DIR="server/mpm/prefork" -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D HTTPD_ROOT="/usr/local/apache" -D SUEXEC_BIN="/usr/local/apache/bin/suexec" -D DEFAULT_PIDLOG="logs/httpd.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_LOCKFILE="logs/accept.lock" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="conf/mime.types" -D SERVER_CONFIG_FILE="conf/httpd.conf" Output of $ php -i: http://pastebin.com/EiRut6Nm
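    One detail worth ruling out first: archive.sh invokes PHP on the command line, and the CLI binary can load a different php.ini (or be given its own limit per run) than the Apache module whose settings were edited. A small sketch of the checks, under that assumption:

        php --ini                                        # which ini file(s) the CLI actually loads
        php -r 'echo ini_get("memory_limit"), PHP_EOL;'  # the limit the CLI really runs with
        # force a limit for a single invocation (value illustrative); the same -d flag can be
        # added to the php call inside archive.sh
        php -d memory_limit=1024M -r 'echo ini_get("memory_limit"), PHP_EOL;'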

    Read the article

  • nginx serving php for download (previously: nginx multiple location alias 404)

    - by torsten
    Im having issues with the alias location in the following configuration server { listen 80; server_name localhost; root /srv/http/share; index index.php; include php.conf; location / { try_files $uri $uri/ /index.php$is_args$args; } location /phpmemcachedadmin { alias /srv/http/phpmemcachedadmin; } location /webgrind { alias /srv/http/webgrind; } } while / works well, im getting a 404 for /webgrind and /phpmemcachedadmin. If i switch the root directory to /srv/http and alias the / location, die /phpmemcachedadmin and webgrind work, but not the / location. UPDATE: I managed the probems getting all location to work, so here is the updated config #user html; worker_processes 2; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; gzip on; server { listen 80; server_name localhost; location / { root /srv/http/share; index index.php; try_files $uri $uri/ /index.php$is_args$args; include php.conf; } location /phpmemcachedadmin { root /srv/http; index index.php; try_files $uri $uri/ /index.php$is_args$args; include php.conf; } location /webgrind { root /srv/http; index index.php; try_files $uri $uri/ /index.php$is_args$args; include php.conf; } } } The php.conf looks like this: location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/run/php-fpm/php-fpm.sock; fastcgi_index index.php; include fastcgi.conf; } while the fastcgi.conf like this: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param HTTPS $https if_not_empty; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; But there is a problem serving phpmemcachedadmin. If i call localhost/phpmemcachedadmin/index.php it works quite well (i get a log that i got served the file in access log). On the other hand, if i just call localhost/phpmemcachedadmin/ he serves me the file for download. Neither the error.log nor the access.log log anything when i get served the the file for download. Any ideas?
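    When a .php file comes back as a download, the request for it usually never reached a PHP location at all. A couple of quick checks (log paths assumed to be the defaults) to compare how the directory request and the explicit index.php request are routed:

        curl -sI http://localhost/phpmemcachedadmin/            # the one served as a download
        curl -sI http://localhost/phpmemcachedadmin/index.php   # the one handled by PHP-FPM
        tail -n 20 /var/log/nginx/access.log /var/log/nginx/error.log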

    Read the article

  • LDAP object class violation: attribute ou not allowed in suffix?

    - by Paramaeleon
    I am about to set up an LDAP directory. It is used as a tool to communicate user permissions from a web application to WebDav file system access, e.g. adding a user to the web platform shall allow login to the file system with the same credentials. There are no other usages intended. Following this German tutorial which encourages the use of the attributes c, o, ou etc. over dc, I configured the following suffix and root: suffix "ou=webtool,o=myOrg,c=de" rootdn "cn=ldapadmin,ou=webtool,o=myOrg,c=de" The server starts and I can connect to it with LDAP Admin, which reports “LDAP error: Object lacks”. Well, there aren’t any objects yet. I now want to create the root and admin elements from the shell. I created an init.ldif file: dn: ou=webtool,o=myOrg,c=de objectclass: dcObject objectclass: organization dc: webtool o: webtool dn: cn=ldapadmin,ou=webtool,o=myOrg,c=de objectclass: organizationalRole cn: ldapadmin Trying to load the file runs into an error, telling me that ou is not allowed: server:~ # ldapadd -x -D "cn=ldapadmin,ou=webtool,o=myOrg,c=de" -W -f init.ldif Enter LDAP Password: adding new entry "ou=webtool,o=myOrg,c=de" ldap_add: Object class violation (65) additional info: attribute 'ou' not allowed I am not using ou anywhere except in the suffix, so the question: Isn’t it allowed here? What is allowed here? Here is my answer. I am not allowed to post it as an answer for 8 hours, so don’t mind that it is part of the question for now. I will move it out some day, if I don’t forget to do so. There are numerous dependencies for the creation of elements, and the error messages are rather confusing if you don’t know the concept. The objectclass isn’t necessarily dcObject for the database’s root node, as one is likely to guess when reading several tutorials. Instead, it must correspond to the object’s type: here, for a name starting with ou=, it must be organizationalUnit. I found this piece of information in these tables [link at the end of this post; the site’s new-user link limit prevented posting it inline]. Further on, the object class dictates which properties must and can be added in the record. Here, organizationalUnit must have an ou: entry and must have neither a dc: nor an o: entry. The healthy init.ldif file looks like this: dn: ou=webtool,o=myOrg,c=de objectclass: organizationalUnit ou: LDAP server for my webtool dn: cn=ldapadmin,ou=webtool,o=myOrg,c=de objectclass: organizationalRole cn: ldapadmin Note: The page also states: “While many objectClasses show no MUST attributes you must (ouch) follow any hierarchy […] to determine if this is the really case.” I thought that would mean my root record would have to provide the MUST fields for c= and o= (c: and o:, respectively) but this isn’t the case. The link referred to above is: http://www.zytrax.com/books/ldap/ape/ "Appendix E: LDAP - Object Classes and Attributes"
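    Once the corrected init.ldif is in place, loading and verifying it can be done with the same bind DN; a short sketch:

        ldapadd -x -D "cn=ldapadmin,ou=webtool,o=myOrg,c=de" -W -f init.ldif
        ldapsearch -x -D "cn=ldapadmin,ou=webtool,o=myOrg,c=de" -W \
            -b "ou=webtool,o=myOrg,c=de" "(objectClass=*)" dn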

    Read the article

  • Yahoo is sending our server's transactional email to the Spam folder, even though we have set up SPF and DKIM

    - by Derrick Miller
    Yahoo Mail is sending our server's transactional emails to the Spam folder, even though we have taken quite a few anti-spam steps. By contrast, Gmail allows the messages through to the inbox just fine. Here are the things which are in place: SPF is set up for the domain holsteinplaza.com. Yahoo reports spf=pass in the message headers. DKIM is set up for the domain holsteinplaza.com. Yahoo reports dkim=pass in the message headers. We have a proper reverse DNS entry for the sending mail server. Name - IP matches IP - Name. Neither Domainkeys nor SenderID are set up. From what I can tell, DKIM is the way of the future, and there is not much to be gained from adding Domainkeys or SenderID. Following are the headers. Any ideas what more I should do to get Yahoo to stop flagging the emails as spam? From Holstein Plaza Auctions Sat Jun 25 18:30:08 2011 X-Apparently-To: [email protected] via 98.138.90.132; Sat, 25 Jun 2011 18:30:11 -0700 Return-Path: <[email protected]> X-YahooFilteredBulk: 70.32.113.42 Received-SPF: pass (domain of holsteinplaza.com designates 70.32.113.42 as permitted sender) X-YMailISG: i_vaA_QWLDuLOmXhDjUv3aBKJl5Un6EiP6Yk2m4yn3jeEuYK MkhpqIt9zDUbHARCwXrhl9pqjTANurGVca7gytSs.mryWVQcbWBx.DaItWRb VcyrIzwMzXKCSeu06H2a.cJ7HG5vJLJaKmHUUI_1ttXKn_Aegiu5yHvFX83R Lpth0witO9zfaKvOMaJV3LAxpIpFOydwvq1cqjZ8nURxQbxM3Cl.QW7MxxrC 09qLVn_D_xSdU94QdU22IsVmlaRHv.uU5dnIazu.KSkhKpYykDoZA2SH0SY4 JmTZj3LP8N926xXVDzYQ5K6QvKuJL5g0d9pYZx3KC59sgIu5oHlJ3Q15RdKb f3OJw0PR6oIyJ2yStVr8vfbDgOfj3qig03.Tw6g6MMNpv1G7Cuol4oJeUaYP xELxX6dHgBgCSuWMcbsrxbK4BIXcS2qhpMqYQ4Isk.XXyA8uvmFXyvgc1ds5 8jo0rW.Wsw.55Z.KTPaQ0gHXj0T3OGppYMELSJv1iuhPyyAnZpmq01CU0Qd5 CcRgdyW3HaqhmpXqJCS0Clo16zXA4HmAjR0tgIQrHRLc3D9N02AOzvmDgCb1 vCh0p00QeKVq8UNkcShPRxZFKi9khtkLhPBlXEKkhJ76zyDmHUxTY.dQHVVD 8D2hx7BxbqI9DINI8x5oR5Q8hYkZqHYQsmGNkaU77O2BnsEv5WxMEmzrBJ4Z h8zGCidgYPiZycZfnfaBp0Xb4tya2WMTN45W02JFcO1qq_UMJ9xPeqZhPEj. 
j9YvBAC8324GGF.c8eWcNB2VB34QHgTcVUl3.c0XUCuncls9Cyg4L7AoIdCi HvAklSzDDu9nW6732VEipV9FJ_JkDupDNQU2hfiPG.3OeF8GwTnVYnEn0EiZ aO0NCnZhXuLDcN3K7ml3846yRdASvzPFs9s4aJkzR0FkhVvptiMBEOdRkKdG wHWmvWpK4GTZpW4yU7CnKpW2MiWWn1MP0h_CCZFKs5.3mfmfPjPVIABN_RuU Q8ex5hdKnKlQiqK56LzcPRnYmNtrwdsUX9CYn9d6cPpXR_Bi5jrNJMNzdFvq lGO0CBT4QPe2V45U8PtpMitttuDA1cCvmyBPFswxNlL0jyX0a_W.vl0YW5.d HhDItpHhDxKRUscM28IR.exetq4QCzyM X-Originating-IP: [70.32.113.42] Authentication-Results: mta1267.mail.ac4.yahoo.com from=holsteinplaza.com; domainkeys=neutral (no sig); from=holsteinplaza.com; dkim=pass (ok) Received: from 127.0.0.1 (EHLO predator.axis80.com) (70.32.113.42) by mta1267.mail.ac4.yahoo.com with SMTP; Sat, 25 Jun 2011 18:30:11 -0700 Received: (qmail 1440 invoked by uid 48); 25 Jun 2011 21:30:09 -0400 To: [email protected] Subject: this is a test X-PHPMAILER-DKIM: phpmailer.worxware.com DKIM-Signature: v=1; a=rsa-sha1; q=dns/txt; l=203; s=auction; t=1309051808; c=relaxed/simple; h=From:To:Subject; d=holsteinplaza.com; [email protected]; z=From:=20Holstein=20Plaza=20Auctions=20<[email protected]> |To:[email protected] |Subject:=20this=20is=20a=20test; bh=B3Tw5AQb1va627KEoazuFEBZ0fg=; b=oQ5uFq+oekPTGhszyIritjuuIAi3qPNyeitu+aWMhdx3oC6O2j5hJsDFpK0sS5fms7QdnBkBcEzT0iekEvn9EfAdCkGZ2KrtEC0yv7QKQcrjXxy07GJpj9nq0LYbgOuPdw8mGvKxlRZ+jFBX0DRJm0xXFLkr+MEaILw7adHTCCM= Date: Sat, 25 Jun 2011 21:30:08 -0400 From: Holstein Plaza Auctions <[email protected]> Reply-to: Holstein Plaza Auctions <[email protected]> Message-ID: <[email protected]> X-Priority: 3 X-Mailer: PHPMailer 5.1 (phpmailer.sourceforge.net) MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Content-Type: text/plain; charset="iso-8859-1" Content-Length: 195
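    It may still be worth confirming from an outside host that the published DNS records match what the headers report (the DKIM selector "auction" is taken from the DKIM-Signature above); a quick sketch:

        dig +short TXT holsteinplaza.com                      # SPF record
        dig +short TXT auction._domainkey.holsteinplaza.com   # DKIM public key
        dig +short -x 70.32.113.42                            # reverse DNS of the sending IP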

    Read the article

  • What would cause an .exe to vanish without a trace?

    - by Peter pete
    I have a few computers. One computer, at home, one day suddenly had its pgbouncer.exe vanish. The antivirus didn't have it in its virus chest [avast]. I couldn't find the bgbouncer anywhere. All the other pgbouncer files remained where they used to be, except the exe had vanished. I hadn't uninstalled it, nor had anyone else used the machine. I hadn't installed any new software since the previous time I had used it either. Just now, however, my TV computer was running out of disk space, which was weird because I had python setup to do transcoding and archiving. I logged in and voila! python.exe had vanished !! Once again, Avast.exe didn't have it in its virus chest, and dunno! I do this time, with the TV computer, know exactly the date that python vanished. Sat the 18th my python scripts ran fine. Sat the 19th python was gone. I'm going to do some hunting in the event log to see what happened that day. But if anyone has experienced vanishings before and has a clue what happened, I would like to know. FYI: Both computers (bgbouncer vanish and python vanish) were running Win7 with the RDP hack and both on SSDs and both with Avast. Both computers had all windows updates set to manual(to prevent random stuff changing!) and neither had recently had any windows updates manually applied. FYI2: Tv computer had since the beginning of October dropbox running all the time trying to download two files. Sadly, a temporary download of each of these two files by dropbox resulted in Avast freaking out and virus-chesting them, and then dropbox downloading them partially again, before being dropboxed. Now, these two files were binaries from a program I had personally written and were clean on other machines. Since the python vanishing I have deleted these two binaries from dropbox (using the website) and dropbox exe on the tv computer is now at peace. I don't think this should cause python.exe to vanish though :/ New edit: On the 18/10/2013, at 0742 my python script ended with an error: "file still in use" which was unexpected but I shrugged it off since sometimes media portal doesn't release the recording. But on second thoughts is weird since the show in use would have been finished recording the day before. On the 18/10/2013 at 0807 the windows event log complains that several drivers required for CutePDF, send to onenote 2010, send to onenote 2013, microsoft xps document writer aren't installed. I just checked now and indeed those printers have vanished! New update! I found my python.exe that had been removed. It was still in the C:\Python33\ directory except it had been renamed to a random string charater.tmp (ie, it was made into a temp file) with a creation date of 19/10/2013 at 0600:02 am. Now, the computer normally wakes at 6am to do transcoding. What could have moved my python file into a tmp file?

    Read the article

  • Email forwarding from my domain to gmail - FAIL

    - by pitosalas
    [There are numerous similar questions on ServerFault but I couldn't find one that was exactly on point] Background: I use Gmail for my email client. My email is [email protected]. However the email that people communicate to me with is [email protected]. I run the server that hosts www.example.com and other domains, at ServerBeach. Up to yesterday, I had SENDMAIL painlessly just forward emails to [email protected] to [email protected] and everything was fine, for several years in fact. Suddenly my email stopped working - that is, my gmail account stopped receiving emails via the forward from my server. Looking into it I found a bunch of emails sitting on my server with content like this: ... while talking to gmail-smtp-in.l.google.com.: RCPT To: <<< 450-4.2.1 The user you are trying to contact is receiving mail at a rate that <<< 450-4.2.1 prevents additional messages from being delivered. Please resend your <<< 450-4.2.1 message at a later time. If the user is able to receive mail at that <<< 450-4.2.1 time, your message will be delivered. For more information, please <<< 450 4.2.1 visit xxxxxx://mail.google.com/support/bin/answer.py?answer=6592 u15si37138086qco.76 [email protected]... Deferred: 450-4.2.1 The user you are trying to contact is receiving mail at a rate that DATA <<< 550-5.7.1 [64.34.168.137 1] Our system has detected an unusual rate of <<< 550-5.7.1 unsolicited mail originating from your IP address. To protect our <<< 550-5.7.1 users from spam, mail sent from your IP address has been blocked. <<< 550-5.7.1 Please visit xxxxx://www.google.com/mail/help/bulk_mail.html to review <<< 550 5.7.1 our Bulk Email Senders Guidelines. u15si37138086qco.76 554 5.0.0 Service unavailable ... while talking to alt1.gmail-smtp-in.l.google.com.: From what I've been researching, I think somehow someone has/is hijacking my domain name or something and this somehow has caused gmail's servers to notice and cut me off. But I don't know really what's going on nor do I see whatever emails might be involved. I've read stuff on zoneedit.com that sounds like they might have a solution in their service for what I am trying to do. I also read a lot about admining DNS and SENDMAIL and tried various things, but nothing works. Can you tell from my description what is going on that caused GMail's server to stop accepting email from my server and is there a way to stop it? What is the 'correct' way to configure things so that emails to [email protected] behave as if they were sent to [email protected]? Thanks so much!

    Read the article

  • Can I make my drives visible and change their partition type without losing my data?

    - by user165408
    I have made a lot of mistakes and now I cannot see my hard disk nor I can start my operating system on my laptop. All my passwords and important files on my hdd without any backup. I followed this course of action Changed my hard disk partitions to dynamic just for getting 5th partition. (1st mistake) Decreased partitions to 4 again. Backed up operating system from 4th to 3rd partition with Norton Ghost. Booted from a live CD for Windows XP. Formatted 4th partition and moved my all important data from 1st and 2nd partitions to the 4th partition. Deleted 1st and 2nd partitions and got 1 partition from half of empty space. So I have just 3 partitions and empty space between 1st and 2nd partitions. Tried to install Windows 8 to the first partition but it did not allow because it is dynamic. Also it did not allow to install to other partitions. Tried to install Windows XP to the 1st partition but it said if I continue I cannot use other drivers. Therefore I escaped from installing it. Booted from the Windows XP live CD then increased 1st partiton to less than 400mb of empty space. Therefore I thought it will be adjacent but it was shown as 2 partitions. In my computer I see just 3 drivers. Using Norton Ghost I recovered my OS to the 1st partition. (2nd mistake it was on 4th partition originally) Booted from a Windows XP live CD I tried to install bcdedit to the Windows XP live CD but it did not work. Then I tried to install EaseUS Partition Master Home Edition. It was installed with errors then I start it and it showed me an error like there is no hard disk. I looked to my PC and my drivers were not there. Booted from the Norton Ghost CD and it did not show me my drivers either, but before I was able to see them. I checked numbers of partition shown by the Norton Ghost utility and they are still have same numbers so I have to see my drivers but I cannot see them now. My hard disk is shown as extarnal dynamic now so I cannot see any drive in my PC in the live Windows XP. There are two options; first one is import extarnal disk and second one is convert disk to basic. Will they delete my data? I fear booting from CDs like Windows XP live CD, Norton Ghost CD, and the operating system CD/DVD, because they may overwrite a few MB their data to my data. These recover tools are already exist in Windows XP live CD by The Ultimate Boot CD for Windows. Can any of them help me? CompuAppa SwissKnife V3 DBXtract Disk Investigator Fab's AutoBackup 2.0 FileRecovery Floppy Repair Free Undelete Handy Recovery Recovery Manager Restorastion Restorastion Help File by UBCD4Win UnChk Unstoppable Copier Finally How can I make it so that my drives are visible again without losing my data? How can I convert my dynamic partitions to basic without losing my data?

    Read the article

  • Why are USB 2.0 devices crashing my system?

    - by BenAlabaster
    Background on the machine I'm having a problem with: The machine was inherited and appears to be circa 2003 (there's a date stamp on the power supply which leads me to this conclusion). I've got it set up as a Skype terminal for my 2 year old to keep in touch with her grandparents and other members of the family - which everyone loves. It has a generic baby-ATX motherboard with no identifying markings. CPU-Z identifies the motherboard model as VT8601 but doesn't provide me with any manufacturer name. There's one stamp on the motherboard that says "Rev.B". On board it has 10/100 LAN, 2 x USB 1.0, VGA, PS/2 for KB and mouse, parallel port, 2 x serial ports, 2 x IDE, 1 x floppy, 2 x SDRAM slots, 1 x CPU housing that is seating a 1.3GHz Intel Celeron CPU, 3 x PCI, 1 x AGP - although you can only use 2 of the PCI slots if you use the AGP slot due to the physical layout of the board. It's got 768Mb PC133 SDRAM - 1 x 512Mb & 1 x 256Mb installed as well as a D-LINK WDA-2320 54G Wi-Fi network card and a generic USB 2.0 expansion board containing 3 x external + 1 x internal USB connectors. All this is sitting in a slimline case. I don't know the wattage of the PSU, but can post this later if this proves to be helpful. The motherboard is running a version of Award BIOS for which I don't have the version number to hand but can again post this later if it would be helpful. It has an 80Gb Western Digital hard drive freshly formatted and built with Windows XP Professional with Service Pack 3 and all current patches. In addition to Windows XP, the only other software it's running is Skype 4.1 (4.2 crashes the machine as soon as it starts up). It's got a Daytek MV150 15" touch screen running through the VGA and COM1 with the most current drivers from the Daytek website and the most current version of ELO-Touchsystems drivers for the touch component. The webcam is a Logitech Webcam C200 with the latest drivers from the Logitech website. The problem If I hook any USB 2.0 devices to this machine, it hangs the whole machine and I have to hard boot it to get it back up. Workarounds found I can plug the same devices into the on board USB 1.0 connectors and everything works fine, albeit at reduced performance. I've tried 3 different kinds of USB thumb drives, 3 different makes/models of webcams and my iPhone all with the same effect. They're recognized and don't hang the machine when I hook them to the USB 1.0 but if I hook them to the USB 2.0 ports, the machine hangs within a couple of seconds of recognizing the devices were connected. Attempted solutions I've tried disabling all the on board devices that I'm not using - such as the on board LAN, the second COM port, the AGP connector etc. through the BIOS in an (perhaps misguided or futile) attempt to reduce the power consumption... I don't think it had any effect but it didn't do any harm. I was wondering if the PSU wattage just isn't enough to drive the USB 2.0 devices; I've seen this suggested but haven't found any confirmation that this could really be an issue - nor have I found a way to work around this issue - if indeed it is one. Any ideas? The only thing I haven't done which I only just thought of while writing this essay is trying the USB 2.0 card in a different PCI slot, or re-ordering the wi-fi and USB cards in the slots... although I'm not sure if this will make any difference. I've installed the USB card in another machine and it works without issue, so it's not a problem with the USB card itself. 
Other thoughts Perhaps this is an incompatibility between the USB 2.0 card and the BIOS, would re-flashing the BIOS with a newer version help? Do I need to be able to identify the manufacturer of the motherboard in order to be able to find a BIOS edition specific for this motherboard or will any version of Award BIOS function in its place? Question Does anyone have any ideas that could help me get my USB 2.0 devices hooked up to this machine?

    Read the article

  • Where is my VMware-ws FreeNAS CIFS(ZFS) bottle-neck?

    - by maka
    Background: I'm building a quiet HTPC + NAS that is also supposed to be used for general computer usage. I'm so far generally happy with things, it was just that I was expecting a little better IO performance. I have no clue if my expectations are unreal. The NAS is there as a general purpose file storage and as a media server for XBMC and other devices. ZFS is a requirement. Question: Where is my bottle-neck, and is there anything I can do config wise, to improve my performance? I'm thinking VM-disk settings could be something but I really have no idea where to go since I'm neither experienced with FreeNAS nor VMware-WS. Tests: When I'm on the host OS and copy files (from the SSD) to the CIFS share, I get around 30 Mbytes/sec read and write. When I'm on my laptop laptop, wired to the network, I get about the same specs. The test I've done are with a 16 GB ISO, and with about 200 MB of RARs and I've tried avoiding the RAM-cache by reading different files than the ones I'm writing ( 10 GB). It feels like having less CPU cores is a lot more efficient, since the resource manager in Windows reports less CPU-usage. With 4 cores in VMware, CPU usage was 50-80%, with 1 core it was 25-60%. EDIT: HD ActiveTime was quite high on SSD so I moved the page file, disabled hibernate and enabled Win DiskCache both on SSD and RAID. This resulted in no real performance difference for one file, but if i transferred 2 files the total speed went up to 50 Mbytes/s vs ~40. The ActiveTime avg also went down a lot (to ~20%) but has now higher bursts. DiskIO is on ~ 30-35 Mbytes/s avgs, with ~100Mb bursts. Network is on 200-250Mbits/s with ~45 active TCP connections. Hardware Asus F2A85-M Pro A10-5700 16GB DDR3 1600 OCZ Vertex 2 128GB SSD 2x Generic 1tb 7200 RPM drives as RAID0 (in win7) Intel Gigabit Desktop CT Software Host OS: Win7 (SSD) VMware Worksation 9 (SSD) FreeNAS 8.3 VM (20GB VDisk on SSD) CPU: I've tried 1, 2 and 4 cores. Virtualisation engine, Preferred mode: Automatic 10,24Gb ram 50Gb SCSI VDisk on the RAID0, VDisk is formatted as ZFS and exposed through CIFS through FreeNAS. NIC Bridge, Replicate physical network state Below are two typical process print-outs while I'm transfering one file to the CIFS share. last pid: 2707; load averages: 0.60, 0.43, 0.24 up 0+00:07:05 00:34:26 32 processes: 2 running, 30 sleeping Mem: 101M Active, 53M Inact, 1620M Wired, 2188K Cache, 149M Buf, 8117M Free Swap: 4096M Total, 4096M Free PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND 2640 root 1 102 0 50164K 10364K RUN 0:25 25.98% smbd 1897 root 6 44 0 168M 74808K uwait 0:02 0.00% python last pid: 2746; load averages: 0.93, 0.60, 0.33 up 0+00:08:53 00:36:14 33 processes: 2 running, 31 sleeping Mem: 101M Active, 53M Inact, 4722M Wired, 2188K Cache, 152M Buf, 5015M Free Swap: 4096M Total, 4096M Free PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND 2640 root 1 76 0 50164K 10364K RUN 0:52 16.99% smbd 1897 root 6 44 0 168M 74816K uwait 0:02 0.00% python I'm sorry if my question isn't phrased right, I'm really bad at these kind of things, and it is the first time I post here at SU. I also appreciate any other suggestions to something, I could have missed.
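    To narrow down whether the ~30 MB/s ceiling is the virtual network path or the disk/ZFS path, each can be measured in isolation; a rough sketch, assuming iperf is available on the FreeNAS VM and using an illustrative pool path (note that writing zeros overstates throughput if compression is enabled on the dataset):

        # on the FreeNAS VM:
        iperf -s
        # on the laptop or the host:
        iperf -c <freenas-ip> -t 30
        # raw write speed inside the VM, bypassing CIFS entirely:
        dd if=/dev/zero of=/mnt/tank/ddtest bs=1m count=4096 && rm /mnt/tank/ddtest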

    Read the article

  • BSOD Code 16, artifacts all over the place (gtx 260)

    - by belinea
    I have the following, quite dated rig: E8400 Core 2 Duo CPU, Intel Dragontail Peak DP35DP motherboard on the Intel Bearlake P35 chipset, 4GB RAM, Geforce 260gtx, Corsair 650W PSU, Windows 7 64-bit. The following things have happened in the last few days. I first decided to update my Nvidia drivers to the latest version. That was 4 days ago. The PC worked fine for 2 days and I was able to play a few games as well without any problems. Then 2 days ago the first crash happened while playing the new XCOM game: BSOD code 16, just the blue screen, no artifacts. The PC rebooted and worked well again; I continued playing this game for another 2 hours and went to sleep. The next evening I tried to play some BF3 multiplayer (I used to play on LOW settings). Approx. 10 minutes into the game, red/pink-ish artifacts appeared on the screen and the game quit to the desktop. I restarted the game and another 3-4 minutes later there was another crash to desktop, but this time followed shortly by a BSOD code 16. From that moment I started seeing artifacts on random startups, including the Windows loading screen and the BIOS itself. Windows would still load, but soon enough it would BSOD on a simple task like opening an Internet browser. Today I get tons of artifacts (little red dashes all over the screen) in the BIOS, on the loading screen, and in normal Windows mode as well as safe mode. I suspected it wasn't the drivers, but I tried removing and sweeping them entirely in safe mode anyway. The PC would still start with artifacts all over the place but would load normal mode, just without the driver, in the default lowest resolution. As soon as the proper Nvidia drivers are installed, though, and the PC rebooted, Windows doesn't load at all, as the BSOD now appears on the loading screen. However, again, if I go to safe mode and remove the drivers, normal mode launches fine. So obviously the crash happens only at the higher resolution. I opened my machine this morning and gave it a proper cleaning even though it wasn't heavily dusted. It didn't help, and the number of artifacts seems to increase with every PC restart. I'm writing these words in safe mode, which works, but I have to look through all the red dots and dashes. I don't have a built-in GPU, so I can't try removing my Geforce card, nor can I borrow a GPU from anyone else. What are my options? I was looking into getting a completely new rig around Christmas, so I'm not freaking out about this. If everything points to a hardware issue I may simply decide to get the new machine earlier and not bother with fixing this one. However, it would be great to learn more if this is indeed a situation that has slim chances of getting sorted. I realize BSOD code 16 is a rather popular topic online, but every story seems a bit different and there can be a number of issues behind it. Hence a new thread.

    Read the article

  • "ID 046d:c50e Logitech, Inc. Cordless Mouse Receiver" wheel-click is wrong

    - by sputnick
    I use this mouse under archlinux x86_64 with 3.2.8-1-ARCH kernel. I have some problems to select and then paste with the wheel-click in some applications like konversation, not in a terminal nor an editor. I don't know if it's a hardware problem or a software one. $ lsusb -v Bus 002 Device 110: ID 046d:c50e Logitech, Inc. Cordless Mouse Receiver Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 8 idVendor 0x046d Logitech, Inc. idProduct 0xc50e Cordless Mouse Receiver bcdDevice 25.10 iManufacturer 1 Logitech iProduct 2 USB RECEIVER iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 34 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xa0 (Bus Powered) Remote Wakeup MaxPower 70mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 1 Boot Interface Subclass bInterfaceProtocol 2 Mouse iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.11 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 95 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 10 Device Status: 0x0000 (Bus Powered) When I see what's happens in xev, the output is different compared to another mouse My buggy Logitech mouse : ButtonPress event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170350700, (48,52), root:(1491,75), state 0x10, button 11, same_screen YES EnterNotify event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x0, time 170350700, (48,52), root:(1491,75), mode NotifyGrab, detail NotifyInferior, same_screen YES, focus YES, state 16 KeymapNotify event, serial 40, synthetic NO, window 0x0, keys: 90 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ButtonPress event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170350716, (48,52), root:(1491,75), state 0x10, button 6, same_screen YES ButtonRelease event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170350716, (48,52), root:(1491,75), state 0x10, button 6, same_screen YES ButtonRelease event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170350988, (48,52), root:(1491,75), state 0x10, button 11, same_screen YES LeaveNotify event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x0, time 170350988, (48,52), root:(1491,75), mode NotifyUngrab, detail NotifyInferior, same_screen YES, focus YES, state 16 a working mouse (dell) : ButtonPress event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170245131, (46,32), root:(1489,55), state 0x10, button 2, same_screen YES EnterNotify event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x0, time 170245131, (46,32), root:(1489,55), mode NotifyGrab, detail NotifyInferior, same_screen YES, focus YES, state 528 KeymapNotify event, serial 40, synthetic NO, window 0x0, keys: 90 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ButtonRelease event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170245411, (46,32), root:(1489,55), state 0x210, button 2, same_screen YES 
LeaveNotify event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x0, time 170245411, (46,32), root:(1489,55), mode NotifyUngrab, detail NotifyInferior, same_screen YES, focus YES, state 16 A demo of the problem when I use konversation (IRC) : http://www.youtube.com/watch?v=lhmr92M7NCc I tried to modify the button map with xmodmap like this with no success (one at a time) : xmodmap -e "pointer = 1 0 3" xmodmap -e "pointer = 1 1 3" xmodmap -e "pointer = 1 2 3" xmodmap -e "pointer = 1 3 3" xmodmap -e "pointer = 1 4 3" xmodmap -e "pointer = 1 5 3" xmodmap -e "pointer = 1 6 3" xmodmap -e "pointer = 1 7 3" xmodmap -e "pointer = 1 8 3" xmodmap -e "pointer = 1 9 3" xmodmap -e "pointer = 1 10 3" xmodmap -e "pointer = 1 11 3" xmodmap -e "pointer = 1 12 3" xmodmap -e "pointer = 1 13 3" xmodmap -e "pointer = 1 14 3" xmodmap -e "pointer = 1 15 3" xmodmap -e "pointer = 1 16 3" xmodmap -e "pointer = 1 17 3" xmodmap -e "pointer = 1 18 3" xmodmap -e "pointer = 1 19 3" xmodmap -e "pointer = 1 20 3" xmodmap -e "pointer = 1 21 3" xmodmap -e "pointer = 1 22 3" xmodmap -e "pointer = 1 23 3" xmodmap -e "pointer = 1 24 3" xmodmap -e "pointer = 1 25 3" Any clue ? I would like to avoid buying a new mouse just for a paste problem.

    Read the article

  • KVM with one host IP and a different subnet for machines

    - by Jguy
    I've already set up a KVM host with a proper IP configuration, but my host had me create DHCP and use that to assign the IPs to the machines. I want to see if there's an easier (or better) way to do it. When I first set out on this, I didn't find anything that pointed me in the right direction. I'm coming off a fresh install of Debian 6.0 x64, so I have nothing installed. I've logged in, queried for the information below, and changed the password from the one my host set. I have a Debian 6.0 x64 system with the following initial network configuration (I substituted 255 in place of my real first octet): # tail /etc/network/interfaces auto eth0 iface eth0 inet static address 255.9.24.80 broadcast 255.9.24.95 netmask 255.255.255.224 gateway 255.9.24.65 # default route to access subnet up route add -net 255.9.24.64 netmask 255.255.255.224 gw 255.9.24.65 eth0 I have a /29 subnet that I want the virtual machines to use from my host: IP: 255.46.187.152 /29 Mask: 255.255.255.248 Broadcast: 255.46.187.159 Usable IP addresses: 255.46.187.153 to 255.46.187.158 I like the interface of Cloudmin, so I want to try to use that, if I can, to administer my guests. So, my questions: How do I best set this up on the host system so that I can use the additional subnet IPs on the guests and have them accessible from the internet? I also need to host a DNS server, which means one of these VMs has to have two IPs assigned to it and be accessible from the outside world. How can I do that using Cloudmin? I had a question about this here: Multiple IP addresses assigned to one KVM VM But I just reformatted the entire server and am trying to figure out a better way of doing this. Machine information: # ip route show 255.9.24.64/27 via 255.9.24.65 dev eth0 255.9.24.64/27 dev eth0 proto kernel scope link src 255.9.24.80 default via 255.9.24.65 dev eth0 brctl is empty # ip addr list 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether c8:60:00:54:b5:d8 brd ff:ff:ff:ff:ff:ff inet 255.9.24.80/27 brd 255.9.24.95 scope global eth0 inet6 fe80::ca60:ff:fe54:b5d8/64 scope link valid_lft forever preferred_lft forever Thank you for any help you can provide me. EDIT: I've installed KVM and Cloudmin: aptitude install qemu-kvm libvirt-bin wget http://cloudmin.virtualmin.com/gpl/scripts/cloudmin-kvm-debian-install.sh ./cloudmin-kvm-debian-install.sh Rebooted, and now my network configuration looks like this: # device: eth0 iface eth0 inet manual # default route to access subnet iface br0 inet static address 255.9.24.80 netmask 255.255.255.224 broadcast 255.9.24.95 network 255.9.24.64 bridge_ports eth0 gateway 255.9.24.65 In Cloudmin I set the Start IP to 255.46.187.153 and the End IP to 255.46.187.158. The CIDR is 29 and the gateway is 255.46.187.152. I've installed a guest with ubuntuserver 12.04 x64, which was able to reach internet resources during installation, but now it cannot reach anything, nor can it be reached from anything. Its network configuration is: iface eth0 inet static address 255.46.187.153 netmask 255.255.255.224 broadcast 255.46.187.159 gateway 255.46.187.152 dns-nameservers <host provided nameservers> It is not able to ping google.com by DNS name or by direct IP, and I can't ping the VM from the outside or from the host. Any ideas now?
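    One thing worth noting, purely from reading the question: the /29 described above has a mask of 255.255.255.248, yet the guest stanza uses 255.255.255.224, and its gateway (255.46.187.152) is the network address of that /29 rather than a usable host address. Below is a minimal sketch of a guest /etc/network/interfaces stanza that at least matches the stated subnet; the gateway line is an assumption (whatever address the provider actually routes the /29 through, often the host's bridge IP in a routed setup), so treat it as a placeholder rather than a known-good value.

        # sketch only - guest stanza matching the stated /29
        # (the original guest config used 255.255.255.224; a /29 is 255.255.255.248)
        auto eth0
        iface eth0 inet static
            address 255.46.187.153
            netmask 255.255.255.248
            broadcast 255.46.187.159
            # gateway is a placeholder: use whatever address the provider routes the /29 through
            gateway <address the /29 is routed through>
            dns-nameservers <host provided nameservers>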

    Read the article

  • USB connection is unstable with Nexus S 2.3.4 on AMD 64 running 64-bit Windows 7, but works with 32-bit Windows Vista

    - by Mike
    The USB connection is unstable with the Nexus S (Android 2.3.4) on an AMD 64 machine running 64-bit Windows 7, but it works with 32-bit Windows Vista. Problem description: On the 64-bit Windows 7 machine my Nexus S appears to connect, but then it disconnects moments later. Neither accessing USB storage nor loading an Android application package file (APK) using the Android Debug Bridge (ADB) works. On 32-bit Windows Vista, using the same USB cable, USB storage works. I haven't tried the ADB on 32-bit Windows Vista. Reproduction steps for USB storage: (I have provided the reproduction steps for USB storage and not the ADB, because if one isn't working then the other isn't working either, and the USB storage reproduction steps are shorter to document.) 1. Connect the USB cable to the Nexus S and my Windows 7 machine. Effect: The "USB Mass Storage, USB Connected" dialog appears with the button "Turn on USB storage." 2. Click "Turn on USB storage." Effect: The "working circle" appears. A dialog briefly appears saying "USB storage in use," then it either returns me to step 1 (now that I am running 2.3.4) or is replaced with the Nexus S's application home page (while I was running 2.3.3). I'm not sure if the version matters, but I mention it for completeness. On the 32-bit Windows Vista machine the connection is stable. I am able to navigate through the Nexus S file system; create, read, update, and delete files; etc. I haven't tried connecting with the ADB. Troubleshooting summary: Tried and failed: Uninstalling and reinstalling the Android USB drivers, including removing the files. Uninstalling my custom software. Pulling the Nexus S's battery. Restarting the Nexus S. Restarting 64-bit Windows 7. Changing USB ports on the 64-bit Windows 7 box. Comparing the dates and file sizes of the DLLs in my google-usb_driver\amd64 directory and the windows\System32 directory: they match. The sizes for the google-usb_driver\i386 directory do not match (expected). Turning off debugging mode on the Nexus S does not resolve the problem. Searching Google. Tried and succeeded: Connecting to another machine (Windows Vista) using the same USB cable and Nexus S phone. Troubleshooting observations: I notice that uninstalling the device drivers and deleting the files, then reinstalling the drivers, rebooting 64-bit Windows 7, unplugging the Nexus S and plugging it back in occasionally helps for a short amount of time (minutes to hours, not days). When it is working, I can both access the Nexus S's drive and load/test applications using the ADB. I have observed some wonky behavior in Device Manager that I haven't tracked down. Sometimes the black Nexus S image appears in the list of devices. Sometimes the image displays as a computer with a green ISA card. Sometimes it neither appears at the top level of devices nor under "other devices," but it does appear under "disk drives" as "Android UMS Composite USB Device." System configuration: The Nexus S is running Android OS 2.3.4; "Settings\about phone\System updates" indicates that it is up to date as of May 21st, 2011. Both 32-bit Windows Vista and 64-bit Windows 7 are up to date. The Windows Vista system is running on a 32-bit Intel processor. Windows 7 is running on a 64-bit AMD processor. I have done Android development on both systems, but I usually develop on the 64-bit Windows 7 machine.

    Read the article

  • SQL Server Log File Won't Shrink because "records are pending replication" on a non-replicated DB?

    - by user796466
    I have a non-mission-critical, 9am-5pm SQL Server database that I have set up to do nightly full backups and log backups every 30 minutes during business hours. The database is in full recovery, and normally I have no reason to truncate/shrink logs unless I do some heavy maintenance; log backups manage the size with no issue. However, I had not been at this client for several weeks, and upon inspection I noticed that the log had grown to about 10 times the size of the .mdf file. I poked around: backups had been running, and I had not gotten any severity error alerts (SQL mail). I attempted to put the DB in simple recovery and shrink the log; this was no good. I proceeded to try a log backup and I got: The log was not truncated because records at the beginning of the log are pending replication or Change Data Capture. Ensure the Log Reader Agent or capture job is running or use sp_repldone to mark transactions as distributed or captured. I restarted SQL Server, rinsed and repeated: same thing. Which left me baffled, because replication is not, nor ever has been, set up on this database or server. So the log backups have not been flushing the .ldf. After a couple of hours of research I found: http://www.sqlmonster.com/Uwe/Forum.aspx/sql-server/5445/Log-file-is-not-truncated-inspite-of-regular-log-backup http://www.eggheadcafe.com/software/aspnet/30708322/the-log-was-not-truncated-because-records-at-the-beginning-of-the-log-are-pending-replication.aspx It seems to be some kind of poorly documented bug. The solution seems to be to run exec sp_repldone, more precisely: EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1 "This procedure can be used in emergency situations to allow truncation of the transaction log when transactions pending replication are present. Using this procedure prevents Microsoft SQL Server 2000 from replicating the database until the database is unpublished and republished." ~ MSDN When I do that I get the following: Msg 18757, Level 16, State 1, Procedure sp_repldone, Line 1 Unable to execute procedure. The database is not published. Execute the procedure in a database that is published for replication. Which makes sense, because the DB has never been published for replication. I have several questions: A) First and foremost, WTF is going on? What is causing this? I am interested in knowing the why here. Is this genuinely a bug, or is there some aspect of the backup that is not functioning properly and causes the DB to mimic a replicated state? Someone please edify me on this. B) Second, do I really have to publish/replicate this DB to exec this SP to fix this? That sounds crazy; or is there some T-SQL that can put it in a published state, exec the proc, and let me be on my way? C) Third, if I do indeed have to publish this database to exec the SP to release this unneeded, wrongly-flagged replication state and get my .ldf file and backups back on track, how do I publish the database without the online host that it is asking for? I don't generally do this kind of database administration and need some guidance. Sorry if this is too verbose, but just voicing the question helps me clarify it. Thank you in advance for your help.
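    For what it's worth, the workaround usually cited for this exact symptom (sys.databases showing a log_reuse_wait_desc of REPLICATION for a database that was never published) is to temporarily flip the publish option on, run sp_repldone, take a log backup, shrink the file, and flip the option back off; sp_removedbreplication is the other commonly suggested cleanup. The T-SQL below is only a sketch of that sequence with placeholder names and paths; it assumes the replication stored procedures are installed and that you have sysadmin rights, and it has not been verified against this particular server.

        -- Sketch of the commonly cited workaround; MyDB, the backup path and the
        -- logical log file name are placeholders.
        USE master;
        EXEC sp_replicationdboption @dbname = N'MyDB', @optname = N'publish', @value = N'true';

        USE MyDB;
        EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1;

        -- back up the log so it can be truncated, then shrink the .ldf
        BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log.trn';
        DBCC SHRINKFILE (N'MyDB_log', 1024);

        USE master;
        EXEC sp_replicationdboption @dbname = N'MyDB', @optname = N'publish', @value = N'false';
        -- alternative cleanup: EXEC sp_removedbreplication N'MyDB';

        -- confirm the log is no longer waiting on replication
        SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = N'MyDB';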

    Read the article

  • Understanding 400 Bad Request Exception

    - by imran_ku07
        Introduction: Why am I getting this exception? What is the cause of this error? Developers are always curious to know the root cause of an exception, even when they have found the solution elsewhere. So what is the reason for this exception (400 Bad Request)? The answer is security. Security is an important feature for any application, and ASP.NET tries its best to give you as secure an application environment as possible. One important security feature is related to URLs, because there are various ways a hacker can try to access server resources. Therefore it is important to make your application as secure as possible. Fortunately, ASP.NET provides this security by throwing a Bad Request exception whenever it sees fit. In this article I try to present when ASP.NET throws this exception. You will also see some new ASP.NET 4 features which give developers some control over this situation. Description: http.sys Restrictions: It is interesting to note that after deploying your application on a Windows server that runs IIS 6 or higher, the first receptionist of an HTTP request is the kernel-mode HTTP driver: http.sys. Therefore, for your request to complete successfully, you need to present your validity to http.sys and pass the http.sys restrictions. An HTTP request URL must not contain any character in the ASCII range 0x00 to 0x1F, because they are not printable. These characters are invalid because they are invalid URL characters as defined in RFC 2396 of the IETF. A question may arise: how is it even possible to send an unprintable character? The answer is: when you send your request from your application in binary format. Another restriction is on the size of the request. A request containing protocol, server name, headers, query string information and individual headers sent along with the request must not exceed 16KB, and an individual header must not exceed 16KB. Any individual path segment (the portion of the URL that does not include protocol, server name, and query string; for example, in http://a/b/c?d=e, b and c are individual path segments) must not contain more than 260 characters. http.sys also disallows URLs that have more than 255 path segments. If any of the above rules is not followed, you will get a 400 Bad Request exception. The reason for these restrictions is that hack attacks against web servers involve encoding the URL with different character representations. You can change the default behavior enforced by http.sys using some registry switches present at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters. ASP.NET Restrictions: After passing the restrictions enforced by the kernel-mode http.sys, the request is handed off to IIS and then to the ASP.NET engine, and again the request has to pass some restrictions from ASP.NET in order to complete successfully. ASP.NET only allows URL path lengths up to 260 characters (only the path; for example, in http://a/b/c/d the path is from a to d). This means that if you have a long path containing 261 characters, you will get the Bad Request exception. This is due to the NTFS file-path limit. Another restriction concerns which characters can be used in the URL path portion. You can use any characters except a few that are treated as invalid characters in a path. Here are some of these invalid characters in the path portion of a URL: <,>,*,%,&,:,\,?.
To confirm this, just right-click in your Solution Explorer, choose Add New Folder, and try to name the folder with any of the above characters; you will get the message: Files or folders cannot be empty strings nor they contain only '.' or have any of the following characters..... To check the above situation I created a Web Application and put Default.aspx inside an A%A folder (created from Windows Explorer), then navigated to http://localhost:1234/A%25A/Default.aspx; the response I got from the server was the Bad Request exception. The reason is that %25 is the % character, which is an invalid URL path character in ASP.NET. However, you can use these characters in the query string. The reason for these restrictions is security; for example, with the help of % you can double-encode the URL path portion, and : is used to get some specific resource from the server. New ASP.NET 4 Features: It is worth discussing the new ASP.NET 4 features that provide some control in the hands of the developer. Previously we were restricted to a 260-character path length and restricted from using some characters, meaning those characters could not become part of a URL path segment. You can configure maxRequestPathLength and maxQueryStringLength to allow longer or shorter paths and query strings. You can also customize the set of invalid characters using requestPathInvalidChars, under the httpRuntime element. This may be good news for someone who needs to use, in their application, a character that was invalid in previous versions. You can find further detail about the new ASP.NET URL features here. Note that the new ASP.NET settings will not affect http.sys. This means that you have to pass the http.sys restrictions before ASP.NET ever comes into action. Note also that the http.sys restriction described earlier applies to each individual path segment, while maxRequestPathLength applies to the complete path (the portion of the URL that does not include protocol, server name, and query string). For example, if the URL is http://a/b/c/d?e=f, then maxRequestPathLength takes a/b/c/d into account, while http.sys takes a, b and c individually. Summary: Hopefully this helps you understand how some of these initial security features come into play, but I also recommend that you read (at least the first chapter, called Initial Phases of a Web Request, of) Professional ASP.NET 2.0 Security, Membership, and Role Management by Stefan Schackow. It is a really nice book.
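    To make the last section concrete, here is a minimal web.config sketch of the ASP.NET 4 settings the article names (maxRequestPathLength, maxQueryStringLength and requestPathInvalidChars on the httpRuntime element). The values are illustrative only; this particular requestPathInvalidChars value drops % and & from the default invalid set, and the http.sys limits described earlier still apply regardless of what you put here.

        <!-- sketch only: illustrative values for the ASP.NET 4 URL settings -->
        <configuration>
          <system.web>
            <httpRuntime maxRequestPathLength="400"
                         maxQueryStringLength="4096"
                         requestPathInvalidChars="&lt;,&gt;,*,:,\,?" />
          </system.web>
        </configuration>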

    Read the article

  • Analytic functions – they’re not aggregates

    - by Rob Farley
    SQL 2012 brings us a bunch of new analytic functions, together with enhancements to the OVER clause. People who have known me over the years will remember that I’m a big fan of the OVER clause and the types of things that it brings us when applied to aggregate functions, as well as the ranking functions that it enables. The OVER clause was introduced in SQL Server 2005, and remained frustratingly unchanged until SQL Server 2012. This post is going to look at a particular aspect of the analytic functions though (not the enhancements to the OVER clause). When I give presentations about the analytic functions around Australia as part of the tour of SQL Saturdays (starting in Brisbane this Thursday), and in Chicago next month, I’ll make sure it’s sufficiently well described. But for this post – I’m going to skip that and assume you get it. The analytic functions introduced in SQL 2012 seem to come in pairs – FIRST_VALUE and LAST_VALUE, LAG and LEAD, CUME_DIST and PERCENT_RANK, PERCENTILE_CONT and PERCENTILE_DISC. Perhaps frustratingly, they take slightly different forms as well. The ones I want to look at now are FIRST_VALUE and LAST_VALUE, and PERCENTILE_CONT and PERCENTILE_DISC. The reason I’m pulling this ones out is that they always produce the same result within their partitions (if you’re applying them to the whole partition). Consider the following query: SELECT     YEAR(OrderDate),     FIRST_VALUE(TotalDue)         OVER (PARTITION BY YEAR(OrderDate)               ORDER BY OrderDate, SalesOrderID               RANGE BETWEEN UNBOUNDED PRECEDING                         AND UNBOUNDED FOLLOWING),     LAST_VALUE(TotalDue)         OVER (PARTITION BY YEAR(OrderDate)               ORDER BY OrderDate, SalesOrderID               RANGE BETWEEN UNBOUNDED PRECEDING                         AND UNBOUNDED FOLLOWING),     PERCENTILE_CONT(0.95)         WITHIN GROUP (ORDER BY TotalDue)         OVER (PARTITION BY YEAR(OrderDate)),     PERCENTILE_DISC(0.95)         WITHIN GROUP (ORDER BY TotalDue)         OVER (PARTITION BY YEAR(OrderDate)) FROM Sales.SalesOrderHeader ; This is designed to get the TotalDue for the first order of the year, the last order of the year, and also the 95% percentile, using both the continuous and discrete methods (‘discrete’ means it picks the closest one from the values available – ‘continuous’ means it will happily use something between, similar to what you would do for a traditional median of four values). I’m sure you can imagine the results – a different value for each field, but within each year, all the rows the same. Notice that I’m not grouping by the year. Nor am I filtering. This query gives us a result for every row in the SalesOrderHeader table – 31465 in this case (using the original AdventureWorks that dates back to the SQL 2005 days). The RANGE BETWEEN bit in FIRST_VALUE and LAST_VALUE is needed to make sure that we’re considering all the rows available. If we don’t specify that, it assumes we only mean “RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW”, which means that LAST_VALUE ends up being the row we’re looking at. At this point you might think about other environments such as Access or Reporting Services, and remember aggregate functions like FIRST. 
We really should be able to do something like: SELECT     YEAR(OrderDate),     FIRST_VALUE(TotalDue)         OVER (PARTITION BY YEAR(OrderDate)               ORDER BY OrderDate, SalesOrderID               RANGE BETWEEN UNBOUNDED PRECEDING                         AND UNBOUNDED FOLLOWING) FROM Sales.SalesOrderHeader GROUP BY YEAR(OrderDate) ; But you can’t. You get that age-old error: Msg 8120, Level 16, State 1, Line 5 Column 'Sales.SalesOrderHeader.OrderDate' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. Msg 8120, Level 16, State 1, Line 5 Column 'Sales.SalesOrderHeader.SalesOrderID' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. Hmm. You see, FIRST_VALUE isn’t an aggregate function. None of these analytic functions are. There are too many things involved for SQL to realise that the values produced might be identical within the group. Furthermore, you can’t even surround it in a MAX. Then you get a different error, telling you that you can’t use windowed functions in the context of an aggregate. And so we end up grouping by doing a DISTINCT. SELECT DISTINCT     YEAR(OrderDate),         FIRST_VALUE(TotalDue)              OVER (PARTITION BY YEAR(OrderDate)                   ORDER BY OrderDate, SalesOrderID                   RANGE BETWEEN UNBOUNDED PRECEDING                             AND UNBOUNDED FOLLOWING),         LAST_VALUE(TotalDue)             OVER (PARTITION BY YEAR(OrderDate)                   ORDER BY OrderDate, SalesOrderID                   RANGE BETWEEN UNBOUNDED PRECEDING                             AND UNBOUNDED FOLLOWING),     PERCENTILE_CONT(0.95)          WITHIN GROUP (ORDER BY TotalDue)         OVER (PARTITION BY YEAR(OrderDate)),     PERCENTILE_DISC(0.95)         WITHIN GROUP (ORDER BY TotalDue)         OVER (PARTITION BY YEAR(OrderDate)) FROM Sales.SalesOrderHeader ; I’m sorry. It’s just the way it goes. Hopefully it’ll change the future, but for now, it’s what you’ll have to do. If we look in the execution plan, we see that it’s incredibly ugly, and actually works out the results of these analytic functions for all 31465 rows, finally performing the distinct operation to convert it into the four rows we get in the results. You might be able to achieve a better plan using things like TOP, or the kind of calculation that I used in http://sqlblog.com/blogs/rob_farley/archive/2011/08/23/t-sql-thoughts-about-the-95th-percentile.aspx (which is how PERCENTILE_CONT works), but it’s definitely convenient to use these functions, and in time, I’m sure we’ll see good improvements in the way that they are implemented. Oh, and this post should be good for fellow SQL Server MVP Nigel Sammy’s T-SQL Tuesday this month.
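    As an aside (not from the post): if all you need are the first and last values as a genuinely grouped, one-row-per-year result, a common workaround is to compute row numbers in a CTE and then aggregate, which avoids running DISTINCT over all 31465 rows. The sketch below only covers the FIRST_VALUE/LAST_VALUE part against the same Sales.SalesOrderHeader table; the percentile functions would still need separate handling.

        -- Sketch: grouped first/last TotalDue per year via ROW_NUMBER, then aggregation
        WITH Ordered AS
        (
            SELECT  YEAR(OrderDate) AS OrderYear,
                    TotalDue,
                    ROW_NUMBER() OVER (PARTITION BY YEAR(OrderDate)
                                       ORDER BY OrderDate, SalesOrderID) AS rnFirst,
                    ROW_NUMBER() OVER (PARTITION BY YEAR(OrderDate)
                                       ORDER BY OrderDate DESC, SalesOrderID DESC) AS rnLast
            FROM Sales.SalesOrderHeader
        )
        SELECT  OrderYear,
                MAX(CASE WHEN rnFirst = 1 THEN TotalDue END) AS FirstTotalDue,
                MAX(CASE WHEN rnLast  = 1 THEN TotalDue END) AS LastTotalDue
        FROM Ordered
        GROUP BY OrderYear;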

    Read the article

  • ParallelWork: Feature rich multithreaded fluent task execution library for WPF

    - by oazabir
    ParallelWork is an open source free helper class that lets you run multiple work in parallel threads, get success, failure and progress update on the WPF UI thread, wait for work to complete, abort all work (in case of shutdown), queue work to run after certain time, chain parallel work one after another. It’s more convenient than using .NET’s BackgroundWorker because you don’t have to declare one component per work, nor do you need to declare event handlers to receive notification and carry additional data through private variables. You can safely pass objects produced from different thread to the success callback. Moreover, you can wait for work to complete before you do certain operation and you can abort all parallel work while they are in-flight. If you are building highly responsive WPF UI where you have to carry out multiple job in parallel yet want full control over those parallel jobs completion and cancellation, then the ParallelWork library is the right solution for you. I am using the ParallelWork library in my PlantUmlEditor project, which is a free open source UML editor built on WPF. You can see some realistic use of the ParallelWork library there. Moreover, the test project comes with 400 lines of Behavior Driven Development flavored tests, that confirms it really does what it says it does. The source code of the library is part of the “Utilities” project in PlantUmlEditor source code hosted at Google Code. The library comes in two flavors, one is the ParallelWork static class, which has a collection of static methods that you can call. Another is the Start class, which is a fluent wrapper over the ParallelWork class to make it more readable and aesthetically pleasing code. ParallelWork allows you to start work immediately on separate thread or you can queue a work to start after some duration. You can start an immediate work in a new thread using the following methods: void StartNow(Action doWork, Action onComplete) void StartNow(Action doWork, Action onComplete, Action<Exception> failed) For example, ParallelWork.StartNow(() => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }, () => { workEndedAt = DateTime.Now; }); Or you can use the fluent way Start.Work: Start.Work(() => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }) .OnComplete(() => { workCompletedAt = DateTime.Now; }) .Run(); Besides simple execution of work on a parallel thread, you can have the parallel thread produce some object and then pass it to the success callback by using these overloads: void StartNow<T>(Func<T> doWork, Action<T> onComplete) void StartNow<T>(Func<T> doWork, Action<T> onComplete, Action<Exception> fail) For example, ParallelWork.StartNow<Dictionary<string, string>>( () => { test = new Dictionary<string,string>(); test.Add("test", "test"); return test; }, (result) => { Assert.True(result.ContainsKey("test")); }); Or, the fluent way: Start<Dictionary<string, string>>.Work(() => { test = new Dictionary<string, string>(); test.Add("test", "test"); return test; }) .OnComplete((result) => { Assert.True(result.ContainsKey("test")); }) .Run(); You can also start a work to happen after some time using these methods: DispatcherTimer StartAfter(Action onComplete, TimeSpan duration) DispatcherTimer StartAfter(Action doWork,Action onComplete,TimeSpan duration) You can use this to perform some timed operation on the UI thread, as well as perform some operation in separate thread after some time. 
ParallelWork.StartAfter( () => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }, () => { workCompletedAt = DateTime.Now; }, waitDuration); Or, the fluent way: Start.Work(() => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }) .OnComplete(() => { workCompletedAt = DateTime.Now; }) .RunAfter(waitDuration);   There are several overloads of these functions to have a exception callback for handling exceptions or get progress update from background thread while work is in progress. For example, I use it in my PlantUmlEditor to perform background update of the application. // Check if there's a newer version of the app Start<bool>.Work(() => { return UpdateChecker.HasUpdate(Settings.Default.DownloadUrl); }) .OnComplete((hasUpdate) => { if (hasUpdate) { if (MessageBox.Show(Window.GetWindow(me), "There's a newer version available. Do you want to download and install?", "New version available", MessageBoxButton.YesNo, MessageBoxImage.Information) == MessageBoxResult.Yes) { ParallelWork.StartNow(() => { var tempPath = System.IO.Path.Combine( Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), Settings.Default.SetupExeName); UpdateChecker.DownloadLatestUpdate(Settings.Default.DownloadUrl, tempPath); }, () => { }, (x) => { MessageBox.Show(Window.GetWindow(me), "Download failed. When you run next time, it will try downloading again.", "Download failed", MessageBoxButton.OK, MessageBoxImage.Warning); }); } } }) .OnException((x) => { MessageBox.Show(Window.GetWindow(me), x.Message, "Download failed", MessageBoxButton.OK, MessageBoxImage.Exclamation); }); The above code shows you how to get exception callbacks on the UI thread so that you can take necessary actions on the UI. Moreover, it shows how you can chain two parallel works to happen one after another. Sometimes you want to do some parallel work when user does some activity on the UI. For example, you might want to save file in an editor while user is typing every 10 second. In such case, you need to make sure you don’t start another parallel work every 10 seconds while a work is already queued. You need to make sure you start a new work only when there’s no other background work going on. Here’s how you can do it: private void ContentEditor_TextChanged(object sender, EventArgs e) { if (!ParallelWork.IsAnyWorkRunning()) { ParallelWork.StartAfter(SaveAndRefreshDiagram, TimeSpan.FromSeconds(10)); } } If you want to shutdown your application and want to make sure no parallel work is going on, then you can call the StopAll() method. ParallelWork.StopAll(); If you want to wait for parallel works to complete without a timeout, then you can call the WaitForAllWork(TimeSpan timeout). It will block the current thread until the all parallel work completes or the timeout period elapses. result = ParallelWork.WaitForAllWork(TimeSpan.FromSeconds(1)); The result is true, if all parallel work completed. If it’s false, then the timeout period elapsed and all parallel work did not complete. For details how this library is built and how it works, please read the following codeproject article: ParallelWork: Feature rich multithreaded fluent task execution library for WPF http://www.codeproject.com/KB/WPF/parallelwork.aspx If you like the article, please vote for me.

    Read the article

< Previous Page | 103 104 105 106 107 108 109 110 111 112 113 114  | Next Page >