Search Results

Search found 17913 results on 717 pages for 'old school rules'.


  • What would cause my SendMail server not to acknowledge receiving a TCP Sequence?

    - by Mike B
    My TCP/IP stack knowledge is a little rusty, so please bear with me. I have a CentOS 5.7 server with SendMail and am seeing intermittent timeout issues sending email (particularly larger email) to other remote domains. It doesn't happen with all attachments or recipient domains, just some. After some extended troubleshooting, I think I've narrowed it down to TCP sequences not being acknowledged. Here's a breakdown of the TCP session from a packet capture I collected directly on my MTA (fooMTA):

        Packets 1 - 11: Standard TCP handshake followed by the initial SMTP conversation. No errors.
        Packet #12 - Recipient MTA: TCP sequence 231. Ack 91.
        Packet #13 - FooMTA: TCP sequence 91. Ack 305.
        Packet #14 - FooMTA: TCP sequence 1115. Ack 305.
        Packet #15 - Recipient MTA: TCP sequence 305. Ack 2495.
        Packet #16 - FooMTA: TCP sequence 2495. Ack 305.
        Packet #17 - FooMTA: TCP sequence 5255. Ack 305.
        Packet #18 - Recipient MTA: TCP sequence 305. Ack 5255.
        Packet #19 - FooMTA: TCP sequence 6635. Ack 305.
        Packet #20 - FooMTA: TCP sequence 8015. Ack 305.
        Packet #21 - Recipient MTA: TCP sequence 305. Ack 8015.
        Packet #22 - FooMTA: TCP sequence 10775. Ack 305.
        Packet #23 - FooMTA: TCP sequence 13535. Ack 305.
        Packet #24 - Recipient MTA: TCP sequence 305. Ack 10775.
        Packet #25 - FooMTA: TCP sequence 14915. Ack 305.

    It keeps going like this, with my server still thinking it hasn't received sequence 305. In response, the remote side eventually retransmits its prior data, thinking that it never arrived. Eventually the gap gets so large that no new data is sent and the remote MTA keeps retransmitting old data. This contributes to an exponential backoff, and eventually the remote side gives up.

    What's strange to me is that I see the "missing" TCP sequence (305 in this case) arriving back at my server (via a packet capture collected directly on fooMTA), so I don't get why my server keeps asking for it. Could this be firewall related? What would be the next step in troubleshooting?
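
    One low-level check that often comes up with symptoms like these (the local capture looks fine, yet the two sides never agree on what was received) is NIC checksum/segmentation offload: a capture taken directly on fooMTA is recorded before the hardware finalises checksums, so segments that look healthy locally can still arrive corrupt or get dropped by an intermediate firewall. A rough sketch, assuming the interface is eth0 and using a placeholder recipient IP:

        # Show the current offload settings for the interface (eth0 is an assumption)
        ethtool -k eth0

        # Temporarily disable checksum and segmentation offload to rule out offload artifacts
        ethtool -K eth0 tx off rx off tso off gso off

        # Capture the SMTP session for side-by-side comparison with a capture taken as
        # close to the recipient MTA as possible (the IP below is a placeholder)
        tcpdump -i eth0 -s 0 -w /tmp/fooMTA-smtp.pcap host 203.0.113.10 and port 25

    If a capture taken outside the sending host (a span port, or the recipient's side) shows the large segments mangled or missing while the local capture shows them leaving, the problem sits between the NIC and the far end - typically offload, MTU/fragmentation, or a firewall rewriting sequence numbers.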

  • 20GB+ worth of emails in my /home what is a better solution for that?

    - by Skinkie
    My email storage requirements are outgrowing anything reasonable with respect to local mail storage. As we speak, 99% of my home partition is filled with personal mail in Thunderbird's mail dirs. Needless to say, this is painful and badly searchable, and history has proven to me that backups work, but also that Thunderbird is capable of losing a lot of mail very easily.

    Currently I have a remote IMAPS server (Dovecot) running for my daily mail, accessible from anywhere, which from my own practice works efficiently up to about 1000 emails. Then some archive directories should be used to move mail around. I have been looking into DBMail, but I wonder if I make my case worse or better with such a solution. None of the supported databases employs string deduplication or string compression out of the box, so is this going to help me with 20GB+ of mail? What about falling back to a plain old IMAP server? A filesystem like ZFS would support things like GZIP compression transparently, which could help. Could someone share their thoughts?

    The 20GB mostly consists of mailing lists and normal mail, not things like attachments.

    To add some clarifications: as we speak, my mail is not server-side indexed at all - only my new mail arrives at a remote IMAP server. It is all local storage from former POP3 accounts and locally mirrored Gmail and IMAP accounts. In my perspective it is not Thunderbird that sucks, it's its file format that sucks. Regarding the 1000 mails: on the road I am using Alpine and MobileMail, and I'm quite happy with both of them, but some management is required to actually manage the mail. Sieve helps a lot with that, but browsing through 10,000 e-mails is not fun, especially not on a mobile client. I am quite happy with Dovecot and have never had any issues with it. I just wonder if this is the way to go, or if there are any other, better solutions.

    What my question is: what is the best-practice solution that allows 20GB+ of mail and is on-demand remotely accessible, easy to back up and archive-worthy? It doesn't need to be available 24x7.

    The final approach I took was installing a local IMAP server (Dovecot) and configuring it as my archive, using the following guide: http://en.gentoo-wiki.com/wiki/Dovecot/InstallThunderbird
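
    Since ZFS with transparent compression is one of the options mentioned above, here is a minimal sketch of what a compressed archive dataset could look like (the pool and dataset names are made up for illustration; mailing-list text typically compresses very well, attachments much less so):

        # Create a dataset dedicated to the mail archive on an existing pool ("tank" is a placeholder)
        zfs create tank/mailarchive

        # Enable gzip compression on that dataset; new writes are compressed transparently
        zfs set compression=gzip tank/mailarchive

        # Later, check how much space the compression is actually saving
        zfs get compressratio tank/mailarchive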

  • Backup, Migrate or Clone Failing CentOS 4 (LVM)

    - by Hegelworm
    I've been running a BlueQuartz CentOS 4 system (Nuonce.net distro) for a few years now, and although the hard drive (Deskstar) has always been a bit noisy, on a few recent occasions I've heard it having trouble spinning up. Basically, I want to clone this drive to a similarly sized one (80 gig). I've spent many hours reading up on dd, dd_rescue, rsync, Clonezilla and LVM mirroring, yet the sheer number of options and nightmarish accounts has left me frozen - unable to make an informed decision as to how to start.

    I've made a few attempts. dd failed after about 2 hours as, although the drives appeared to be identical on the surface (ATA Seagate Barracudas, Thai not Chinese), the destination drive is slightly smaller. My most recent attempt involved using a Debian CD to format the new drive, then rsync-ing everything over and editing the new drive's grub and fstab to reflect the changes. No joy here either, as I hadn't chosen LVM when partitioning the destination drive and it wouldn't boot.

    As you can probably tell, I'm out of my depth here, and a panic-invoking mixture of caution and frustration has prompted me to sign up here. The server itself, although not strictly a production environment, has a very specific installation of Festival, LAME and FFmpeg and provides the back-end for a text-to-speech jQuery plugin that I've built over the last 2 years. I'm also planning to rebuild the whole TTS system on Debian, as the existing CentOS system still has PHP4 etc. For now though, I'd really like to just shift everything over to a new drive. As this is my first post, please feel free to lay any house rules on me that I might've overlooked; I've been hovering around StackOverflow for a while now but have only just signed up. Many thanks.

    Update: Thanks for your responses so far - it's much appreciated and makes me feel a little more confident when I can double-check things here. I had the idea of doing a fresh install of CentOS (from the original disk) on the new drive so the partitions and LVM were all set up correctly (after disconnecting my source drive to prevent painful mistakes). I then booted into rescue mode from the same CD and, to avoid a conflicting label, changed the /boot partition's label using e2label to /bootnew. I then changed the VolGroup name using lvm vgrename from VolGroup00 to VolGroup001. I could then boot with both drives in. After mounting the new drive (via its VolGroup001 alias) onto /newhd, I rsync-ed over everything I could to the new drive, using -avr switches and backslashes, like mentioned here.

    I then disconnected my original source drive again, booted from the live CD again, changed the boot partition label back from /bootnew to /boot using e2label, and then renamed the VolGroup back to VolGroup00. I then rebooted and it went through the familiar start-up routine, only to not find a host of files in proc, usr, lib, var etc. The boot did complete, but there were lots of red 'FAILS'. I could log in with my existing creds, but the network was kaput, I couldn't start X (the desktop GUI), and there were also a few (a lot) of error messages pertaining to iptables. Back to square one. I naively thought I'd nailed it. Shall I just buy a bigger hard drive and attempt the dd route? I've read that this can mess with LVM setups, and there's the added risk of working on two unmounted drives at once with a low-level tool. Thanks again.
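
    For reference, a commonly used shape for the rsync step described in the update looks roughly like the sketch below (the LV name and mount point are placeholders; the pseudo-filesystem contents are excluded so they are recreated at boot rather than copied):

        # Mount the freshly prepared destination root (placeholder LV name)
        mount /dev/VolGroup001/LogVol00 /newhd

        # Copy the live system across, preserving permissions, ownership, links and devices,
        # while skipping the contents of pseudo-filesystems and the destination itself
        rsync -av / /newhd \
            --exclude="/proc/*" --exclude="/sys/*" --exclude="/tmp/*" \
            --exclude="/newhd" --exclude="/lost+found"

    After a copy like this, the usual remaining steps are reinstalling grub on the new disk's MBR and checking /etc/fstab and /boot/grub/grub.conf against the new drive's labels and volume-group names - which lines up with the failure mode described above.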

  • Mouse Error Code 24. Windows 7

    - by Cj.
    I've had the same mouse for a while, and it had been working fine until one day it started giving me a message about a device not working properly. I tried updating the drivers and re-installing; I even deleted old drivers in case my computer should be a little confused. It never made a difference, and my mouse seemed to be working just fine despite the permanent error in my device manager. I looked it up several times online, but I never found anything I could actually use; when I go to official websites, I always get the same responses ("plug so-and-so into a different place", "drivers", "install Silverlight before you can watch this tutorial", "try it on a different machine"), so I gave up on that.

    But now is where I have a real problem. Lately, my little strange error evolved into a full-blown error 24, and my mouse is starting to turn on and off randomly, especially when it is being used, though I do hear it go "badum..dadum" when I'm off doing something else. When I looked up error code 24, I really didn't find much other than its meaning:

        Code 24: This device is not present, is not working properly, or does not have all its drivers installed. (Code 24)
        Cause: The device is installed incorrectly. The problem could be a hardware failure, or a new driver might be needed. Devices stay in this state if they have been prepared for removal. After you remove the device, this error disappears.

    But I have tried uninstalling the device entirely several times, and it'll go right back to its previous state with error 24, turning on and off randomly. What do I do? I cannot afford taking it to a repair place, and I can't really afford a new mouse either. I refuse to buy cheap ones, as I am a gamer in need of more than 3 buttons, and a good grip is important. Could there possibly be some confusion in the registry? I do remember having gotten some early problems after I converted my Vista to Windows 7. But I hardly dare go in there unless I'm 100% certain of what I'm going for, and I can honestly say I am at a loss here.

    Edit: it is a USB mouse we're talking about here, an MX™518 Optical Gaming Mouse (Logitech).

    Edit 2: I am seeing no rupture, so it must be on the inside of the mouse, or inside the rubber protecting the cable, which would be really inconvenient to search for.

  • Why does Excel now give me already existing name range error on Copy Sheet?

    - by WilliamKF
    I've been working on a Microsoft Excel 2007 spreadsheet for several days. I'm working from a master template-like sheet and copying it to a new sheet repeatedly. Up until today, this was happening with no issues. However, in the middle of today this suddenly changed and I do not know why. Now, whenever I try to copy a worksheet, I get about ten dialogs, each one with a different name range object (shown below as 'XXXX'), and I click Yes for each one:

        A formula or sheet you want to move or copy contains the name 'XXXX', which already exists on the destination worksheet. Do you want to use this version of the name? To use the name as defined in destination sheet, click Yes. To rename the range referred to in the formula or worksheet, click No, and enter a new name in the Name Conflict dialog box.

    The name range objects refer to cells in the sheet. For example, E6 is called name range PRE on multiple sheets (and has been all along) and some of the formulas refer to PRE instead of $E$6. One of the 'XXXX' above is this PRE. These name ranges should only be resolved within the sheet in which they appear. This was not an issue before, despite the same name range existing on multiple sheets before. I want to keep my name ranges.

    What could have changed in my spreadsheet to cause this change in behavior? I've gone back to prior sheets created this way and now they give the message too when copied. I tried a different computer and a different user, and the same behavior is seen everywhere. I can only conclude something in the spreadsheet has changed. What could this be, and how can I get back the old behavior whereby I can copy sheets with name ranges and not get any errors?

    Looking in the Name Manager I see that the name ranges being complained about show up twice, once with scope Template and again with scope Workbook. If I delete the Template-scoped ones, the error goes away on copy; however, I get a bunch of #REF errors. If I delete the Workbook-scoped ones, all seems okay and the errors on copy go away too, so perhaps this is the answer, but I'm nervous about what effect this deletion will have and wonder how the Workbook ones came into existence in the first place. Will it be safe to just delete the Workbook-scoped Name Manager entries, and how might these have come into existence without my knowing it to begin with?

  • My VPS ubuntu server is very slow

    - by askmike
    I just installed a fresh copy of Ubuntu 12.04 on my VPS because my old installation was very slow; unfortunately this did not fix the problem. By slow I mean requests for my PHP websites take a long time: very slow (30 sec per request) to slow (3+ sec per request). When it's really bad, SSH is also laggy.

    The websites are:
    askmike.org (pretty standard WordPress)
    mvr.me (own PHP)

    How slow? Very slow: here is a picture of loading a clean install of WordPress. Slow: here is a picture of loading a small PHP-based website.

    The VPS has 256 MB of RAM and a 25 GB HDD. Besides serving the 2 small websites, it isn't doing anything, AFAIK.

    What have I installed: a clean Ubuntu Server 12.04, a LAMP stack, a few things like git and nodejs (not using either), ossec (because I thought my server was getting hammered) and munin.

    What I already tried / done: I installed munin so that I could watch IO speed and such. The problem is that I don't know where to look in the munin report. I checked the logs and don't see anything strange (although I don't really know what to look for besides strange / repetitive errors and GET requests). I configured the Apache MPM to:

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients           40
            MaxRequestsPerChild   0
        </IfModule>

    (Apache is using prefork, the default.)

    Stats: I copied the munin report as it appeared at 4:50 last night to a site hosted on a shared webhost. Note that tonight my MySQL crashed somewhere after 1:00 (which is a new problem altogether), so therefore the graph for last night might look strange.

    Can anyone help me get my VPS up to normal speed?

    EDIT: Thanks for the replies. The VPS is 10 bucks a month and is from directvps.nl (a Dutch host; I'm also Dutch). I did two speed tests for disk IO:

        $ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
        1073741824 bytes (1.1 GB) copied, 23.1506 s, 46.4 MB/s

        $ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
        1073741824 bytes (1.1 GB) copied, 39.3796 s, 27.3 MB/s

    Anyway: how can I prove to my VPS host that it is too slow? I can understand a busy server slowing a website down, but 5-30 seconds of load time for a normal PHP webpage?
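
    Beyond the munin graphs, one hedged thing worth checking on a small VPS is whether the host node itself is overcommitted: on a virtual machine, high CPU "steal" time or long I/O waits point at the provider rather than at Apache/PHP. A quick look, assuming the standard tools are installed (iostat comes with the sysstat package):

        # %st (steal) and %wa (I/O wait) in the Cpu(s) line are the interesting numbers
        top -b -n 1 | head -n 5

        # One-second samples; "st" is time stolen by the hypervisor, "wa" is I/O wait
        vmstat 1 5

        # Per-device utilisation and await times
        iostat -x 1 5

    Consistently high steal or await values are concrete numbers to show the provider; if they stay near zero, the bottleneck is more likely memory pressure from MySQL and PHP on 256 MB of RAM (which would also help explain the MySQL crash).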

  • Resetting root password on Fedora Core 3 - serial cable access only

    - by Sensible Eddie
    A little background: we have an old rackmount server running a customised version of Fedora, manufactured by a company called Navaho. The server is a TeamCAT, running some proprietary rubbish called Freedom2. We have to keep it going - the alternative is extraordinarily expensive, and the business is not likely to be running much longer to justify changing things. Through one means or another, it has fallen upon me to try and resolve our lack of root access. The previous admin has fallen under the proverbial bus, and nobody has any clue.

    We have no access to the root account for this server. ssh is running on the server, and there is one account, admin, that we can log in with; however, it has no permission to do anything (ironic...).

    The only other way into the server is with a null-modem serial cable. This works... up to a point. I can see the BIOS, I can see the post-BIOS screen, and then I see "Starting grub", followed by another screen with about four lines of Linux information, but then it stops at that point. The server continues booting, and all services come online after around two minutes, but the serial terminal displays no more information.

    I understand it is possible to put Linux into "single user mode" to reset a root password, but I have no idea how to do this beyond trying to interrupt it at the grub stage listed above. When I have tried, it just froze. It was almost like grub had appeared (since the server did not continue booting) but I couldn't see it on the serial terminal - which made me think maybe the grub screen has some different serial settings? I don't know... it's the first time I've ever used serial for access!

    A friend of mine suggested trying to use a Fedora boot CD. We could boot from USB, so something along this approach is possible, but again we can still only see what's going on through the serial terminal, so it might not be achievable. Does anyone have any suggestions for things I can try? I appreciate this is a bit of a long shot, but any assistance would be invaluable.

    UPDATE 1 - 28/8/12 - we will be making some attempts on this today and will post further details later!
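
    For what it's worth, the usual single-user route on a GRUB-legacy system involves two separate pieces: making GRUB itself talk over the serial port (otherwise its menu is invisible, which would match the freeze described above) and appending single to the kernel line so init drops to a root shell where passwd can be run. A sketch of the relevant /boot/grub/grub.conf lines - the unit, speed and kernel/root paths below are guesses that would have to match this machine's BIOS console settings and actual boot entry:

        # Make the GRUB menu available on the first serial port at 9600 baud
        serial --unit=0 --speed=9600
        terminal --timeout=10 serial console

        # On the default entry's kernel line, add the serial console and "single":
        kernel /vmlinuz-2.6.x ro root=/dev/VolGroup00/LogVol00 console=ttyS0,9600 single

    The other commonly suggested path is booting a Fedora/CentOS disc in rescue mode over the same serial line (boot prompt: linux rescue console=ttyS0,9600), then chroot /mnt/sysimage followed by passwd root.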

  • Low FPS in some games, but hardware not fully used

    - by Mario De Schaepmeester
    I just did a funny little experiment in the game/sim Train Simulator 2013. I normally get good FPS in it (around 30) at full settings. What I did was make a really, really long train, so that the calculations the sim needed to make were enormous (the sim is quite realistic: it takes all things into account like speed/acceleration, G-forces, comfort levels, possible wheel slip and many more, and most of those things for each carriage separately). This resulted in only 14 FPS as reported by the game, but it felt more like 8 FPS or so.

    I have a Logitech G15 keyboard which has an LCD that allows me to monitor CPU/RAM and video card load. The strange thing is, all CPU cores were busy, but the total load was only about 60% maximum at all times. The video card was only at 30% load (possibly an important note: its memory was full, which is however not unusual for the game in question). The RAM had plenty of room and there weren't many operations, as it didn't grow or shrink much. I just have the feeling that the game would run smoother if it used more of my hardware power. Why is it not doing so?

    I had the same in another game, The Elder Scrolls: Morrowind, when using more than 100 mods (that all use scripting) and a few high-res texture mods, plus a full-on graphics improvement program. The engine is very old (2003), so I thought this might be the cause (not being optimised for multithreading). I had thought of possible causes, like:

    The operating system doesn't let the games use all the resources.
    The game doesn't make use of multi-threading appropriately.

    To eliminate the former, I tried a CPU stress tool and it got 100% CPU juice as I let it run, so the OS is not the problem. I gave its thread the "higher" priority though.

    My actual question: in both games, I did things the engine was not really built to do or support. Can those games' framerates be limited because their own engines are not able to cope? What is the real reason and, more importantly, can I help it? And in any case, could something actually be wrong with my hardware? It's all reasonably new, a couple of months old, and I (almost) never experience any other trouble. Modern and much more demanding games work absolutely fine.

    Specs:
    CPU: AMD Phenom II 965 X4 @ 3.4 GHz
    RAM: 8 GB of DDR3 RAM
    Video: MSI GTX560 (nVidia chip) with 1 GB of GDDR5 memory
    OS: Windows 7 Ultimate 64 bit
    Nothing overclocked.

  • I am getting a 400 Bad Request error when using Nginx and PHP-FPM, why?

    - by Bob
    I am trying to run a website (that requires PHP - it technically doesn't require MySQL at this time, but it may sometime in the near future as I continue developing it, so I went ahead and installed that as well) using nginx 1.2.4 and PHP-FPM 5.3.3 on Ubuntu 12.04.1 LTS. As far as I know, I haven't done anything wrong, but clearly something is not quite right - I seem to be getting a 400 Bad Request error whenever I try to browse to my website.

    I've been mostly following one guide, and I've done more or less everything it recommends, except that I did not set up PHP-FPM to use a Unix socket, and I used service as opposed to /etc/init.d/ when starting/stopping nginx, PHP and MySQL. Anyway, here are my relevant configuration files (I have only censored personal/sensitive details, like my domain name - which contains my real name):

    /etc/nginx/nginx.conf

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 15;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "msie6";
            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # nginx-naxsi config
            ##
            # Uncomment it if you installed nginx-naxsi
            ##
            #include /etc/nginx/naxsi_core.rules;

            ##
            # nginx-passenger config
            ##
            # Uncomment it if you installed nginx-passenger
            ##
            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    /etc/nginx/sites-enabled/subdomain.mydomain.net

        server {
            listen 80;        # listen for IPv4
            listen [::]:80;   # listen for IPv6

            server_name www.subdomain.mydomain.net subdomain.mydomain.net;
            access_log /srv/www/subdomain.mydomain.net/logs/access.log;
            error_log /srv/www/subdomain.mydomain.net/logs/error.log;

            location / {
                root /srv/www/subdomain.mydomain.net/public;
                index index.php;
            }

            location ~ \.php$ {
                try_files $uri =400;
                include fastcgi_params;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /srv/www/subdomain.mydomain.net/public$fastcgi_script_name;
            }
        }

    All the directories listed in the configuration files above are correct on my server (to the extent of my knowledge). I have not included /etc/php5/fpm/pool.d/www.conf or /etc/php5/fpm/php.ini in this post as they're rather long, but I have posted them on Pastebin: http://pastebin.com/ensErJD8 and http://pastebin.com/T23dt7vM, respectively. The only thing I've changed in either of the two files was in php.ini, where I set expose_php to off so as to hide the .php file extension from users.

    What can I do to resolve my issue? Please let me know if I need to supply any additional details.
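
    As a hedged first debugging step (not a confirmed fix), it helps to establish whether the 400 comes from the try_files $uri =400; fallback, from PHP-FPM, or from nginx's request parsing, since each leaves a different trace:

        # Watch both the vhost and the global error logs while reproducing the error
        tail -f /srv/www/subdomain.mydomain.net/logs/error.log /var/log/nginx/error.log

        # Request the page verbosely; the headers and body usually show whether nginx
        # itself or the FastCGI backend produced the 400
        curl -v http://subdomain.mydomain.net/index.php

    If nothing ever reaches PHP-FPM, the usual suspects with a configuration shaped like the one above are the =400 fallback firing because the script path doesn't resolve under the location's root, or a Host/server_name mismatch; temporarily raising the log level (error_log ... info;) makes that visible.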

  • Correcting tree from messed up file tree in NTFS partition

    - by Fullmooninu
    It's a real mess of a situation, but I'm quite at the end of my options. It's my personal hard drive, so it's very important to me, and yes, I have no backup =(

    The short story:

    1) I have two discs. One with Windows, and another where I had a bit of empty space at the front of the disk so I could install Linux. The rest was occupied by a 1.8TB NTFS partition filled with data.
    2) I installed Linux, and after a while realized there was not enough space for everything, so I tried using GParted and told it to resize the NTFS partition to a smaller size.
    3) The system jammed. I had to reboot, which broke the resizing operation.

    Here's what I did to fix it:

    a) Rebooted into a Linux live session and used TestDisk to deep-analyze the disk and recover the possible partitions. It found several versions of the NTFS partition, probably made during the resizing. I told TestDisk to open every one of them, and only one could list its files; when trying to open the other options, TestDisk showed an error message. I assumed the one without errors to be the correct one, and I told TestDisk to recover that partition and write a new MBR.
    b) The partition had errors, and Linux has an NTFS fixing tool; I used it, but the system still had errors.
    c) So I booted into Windows and used chkdsk to correct all errors in the partition.
    d) Everything seems fine, but now, back in Windows, when I open one file, it opens another file, or part of another file. As in, some files took up the position of other files.

    What I think happened is that I recovered an old tree, and not the most current one - and that one just happened to be intact, while the most recent one was damaged. As such, the files that were moved during the failed resizing were now, during the automatic correction, wrongly assumed to be in their correct places. So when I open a file, it tries to open another one: Radiohead - Creep.mp3 will open and it will actually be a bit of another song, or even data from a JPG. Some files seem to be all right, but others seem to have had their position taken by others.

    Does anyone know of something really powerful that can help me solve this?
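
    Before any further repair attempts on a partition in this state, a common precaution worth sketching (the tool names are the standard ones, while the device and paths are placeholders) is to take a raw image of the partition and run all subsequent recovery against the copy, so each new attempt can't make the on-disk situation worse:

        # Image the damaged NTFS partition onto a drive with enough free space,
        # keeping a map file so the copy can be resumed if interrupted
        ddrescue /dev/sdb2 /mnt/backup/ntfs-partition.img /mnt/backup/ntfs-partition.map

        # Later recovery attempts (TestDisk, PhotoRec, ntfsfix) can then be pointed at the image
        testdisk /mnt/backup/ntfs-partition.img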

  • Hardware for multipurpose home server

    - by Michael Dmitry Azarkevich
    Hi guys, I'm looking to set up a multipurpose home server and hoped you could help me with the hardware selection. First of all, the services it will provide:

    Hosting a MySQL database (for training and testing purposes)
    FTP server
    Personal mail server
    Home media server

    So with this in mind I've done some research and found some viable solutions:

    A standard PC with the appropriate software (either second-hand or new)
    A non-solid-state mini-ITX system
    A solid-state, fanless mini-ITX system

    I've also noted the pros and cons of each system. A standard second-hand PC with old hardware would be the cheapest option, but it could also have lacking processing power, not enough RAM and generally faulty hardware - plus huge power consumption, heat generation and noise levels. A standard new PC would have top-notch hardware and will stay that way for quite some time, so it's a good investment, but again the main problem is power consumption, heat generation and noise levels. A non-solid-state mini-ITX system would have the advantages of lower power consumption, lower cost (as far as I can see) and long-lasting hardware, but it will generate noise and heat, which will be even worse because of the size. A solid-state, fanless mini-ITX system would have all the advantages of a non-solid-state mini-ITX but with minimal noise and heat; the main disadvantage is the read\write problems of flash memory. All in all, I'm leaning towards a non-solid-state mini-ITX because of the read\write issues of flash memory.

    So, after this overview of what I do know, my questions are: Are all these services even providable from a single server? To my best understanding they are, but then again, I might be wrong. Is any of these solutions viable? If yes, which one is the best for my purposes? If not, what would you suggest?

    Also, on a more software-oriented note: OS-wise, I'm planning to run Linux. I'm currently thinking of four options I've been recommended: CentOS, Gentoo, DSL (Damn Small Linux) and LFS (Linux From Scratch). Any thoughts on this? Any other distro you would recommend? Regarding FTP services, I've heard good things about FileZilla. Does anyone have any experience with it? Do you recommend it, or something else? Regarding the mail service, I know nothing about this except that it exists. Any software you recommend for this task? Home media: same as the mail service - any recommended software?

    Thank you very much.

  • Detection of battery status totally messed up

    - by Faabiioo
    I already posted this question in the Ubuntu forum and on Stack Overflow; I forward it here with the hope of finding some different opinions about the problem.

    I have an Acer TravelMate 5730, which is 3 years old, running Ubuntu 10.04 LTS. One year ago I changed the battery because the old one died. Since then, everything worked like a charm. A week ago I was using my laptop running on battery; it was charged up to 60%. Suddenly it shut down, and for about 24 hours it was as if the battery was totally broken: it didn't charge anymore and upower --dump said state: critical. I was kind of resigned to buying a new battery, when suddenly the orange light became green: the battery was charged and actually working. Strangely, the battery indicator was stuck at 100%, even after 2 hours of running. I tried again with the upower --dump and acpi -b commands, and they kept saying the battery was discharging, though maintaining the percentage at 100%. Thus the battery was working fine for up to 3 hours, but without any warning when it was almost empty, likely to result in a brutal shutdown.

    Today, something different. The upower --dump command says:

        ...
        present:             yes
        rechargeable:        yes
        state:               fully-charged
        energy:              0 Wh
        energy-empty:        0 Wh
        energy-full:         65.12 Wh
        energy-full-design:  65.12 Wh
        energy-rate:         0 W
        voltage:             14.481 V
        percentage:          0%
        capacity:            100%
        technology:          lithium-ion

    I tried to boot WinXP and the problem is pretty much the same, with the battery fully charged, the percentage equal to 0% and no way to fix it. While writing, the situation has changed again:

        present:             yes
        rechargeable:        yes
        state:               charging
        energy:              0 Wh
        energy-empty:        0 Wh
        energy-full:         65.12 Wh
        energy-full-design:  65.12 Wh
        energy-rate:         0 W
        voltage:             14.474 V
        percentage:          0%
        capacity:            100%
        technology:          lithium-ion

    ...charging, but it does not charge up. (Recall, the battery lasted 3 hours until yesterday!)

    So, the big question is: is it a hardware issue, like a dedicated internal circuit being broken? Or is it just the battery that must be changed? Or, rather, some BIOS problem that could be fixed in some way? I'd appreciate any help that can shed some light on this annoying problem. Thanks.
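
    One more data point that can be collected - hedged, because the battery's name under /sys varies between machines (BAT0, BAT1, ...) and some models expose charge_* rather than energy_* files - is what the kernel's ACPI layer reports directly, bypassing upower. If these values are also zero or frozen, the fault is at the battery/embedded-controller level rather than in the userspace stack:

        # List the power supplies the kernel knows about
        ls /sys/class/power_supply/

        # Dump everything ACPI reports for the battery (BAT0 is an assumption)
        grep . /sys/class/power_supply/BAT0/uevent

        # Charge counters, if this model exposes them under these names
        cat /sys/class/power_supply/BAT0/energy_now /sys/class/power_supply/BAT0/energy_full 2>/dev/null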

  • Need help recovering a corrupt SQL database

    - by user570079
    I have a very special case that I have been working on for several days. I have a very large SQL Server 2008 database (about 2 TB) that contains 500 filegroups to support very large partitioned tables. Recently we had a catastrophic failure on one of the drives, we lost several filegroups, and the database became inaccessible.

    We have been doing filegroup backups on a daily basis, but due to other issues we lost our most recent backup of the log and the primary filegroup. We have all the data backed up, but the primary filegroup backup is old. There have been no schema changes since the primary filegroup backup, but the LSNs are now all out of sync and we cannot recover the data.

    I have tried everything I could think of (and have tried just about every trick and hack I could google), but I still end up at the same point, with messages saying that the files for filegroup x do not match the primary filegroup. I am now at the point of trying to edit the system tables (we have a separate temporary environment to do this, so we are not worried about corrupting any production databases). I have tried updating sys.sysdbreg, sys.sysbrickfiles and sys.sysprufiles to try to trick SQL into thinking all the files are online, but a "Select * From OPENROWSET(TABLE DBPROP, 5)" shows a different database state from what I see in sys.sysdbreg. I am now thinking I need to somehow edit the headers of the actual data files to try to line up the LSNs with the primary.

    I appreciate any help anyone can give me here, but please do not respond with things like "you are not supposed to edit mdf, ndf files...." or "see msdn article....", etc. This is an advanced emergency case and I need a real hack so we can just get to the data in this corrupt database and export it to a fresh new database. I know there is a way to do this, but not knowing what the DBPROP system function does (i.e. does it look at system tables or does it actually open the file?) is keeping me from figuring out how to fool SQL into allowing me to read these files. Thanks for any help.

  • Possible HDD malfunction. Need help in diagnosing

    - by Protheus
    Today, when using my PC as I have for almost 4 years, I experienced the following: while opening a new tab in the Opera browser, the screen froze. Music (AIMP 3) continued to play for about 5 minutes and then stopped too. I tried Ctrl+Alt+Del, but the Win7 lock screen didn't appear; Caps/Scroll/Num Lock didn't toggle the diodes on the keyboard. I rebooted my PC and saw that the BIOS suggested I enter its settings or load defaults. I chose defaults. It didn't see a proper boot device (the old faithful "insert proper boot device" something). After a second reboot it said that there is no ExpressGate installed (which I turned off in the BIOS years ago). I went into the BIOS settings to turn off ExpressGate and check the configuration: the time was not reset, all hard drives are present, and the temperature and O.C. settings are nominal (no O.C.). I inserted my Win7 install disk to try recovery. It took awfully long to load (a few minutes) and didn't see the current installation.

    The PC was utilized in 24/7 mode for almost all these years. Hardware configuration:

    ASUS P5Q WS
    Core 2 Quad Q9300 (2.5 GHz, no O.C.)
    MSI GeForce GTX 460
    4x2 GB GeIL EVO 2 (AFAIR)
    Seagate something 750 GB (4 years as system HDD, 24/7)
    WD 1 TB (for random stuff, 5 y.o.)
    Hitachi 500 GB (for even more random stuff, 6 y.o.)
    NEC DVDRW (all disks are SATA)
    Cooler Master Silent Pro 700W

    Software: Windows 7 AND Kubuntu on the same drive, with the GRUB loader. Sorry, I can't remember the exact HDD models and can't see them right now, but I think their models aren't relevant anyway.

    My idea is that due to some system error or hard drive glitch I've wrecked my primary HDD's MBR. Nevertheless, I don't exclude the possibility of another failure. Might it be the motherboard or its SATA controller? I doubt it, because all drives are seen in the BIOS and I could load from DVD. Maybe GRUB got bugged somehow, although I don't see how that's possible from Windows. But I did install Kubuntu from Windows (I wasn't myself then); maybe GRUB wrote itself into some Windows partition and got rewritten in the process?

    Right now I am at work with my flash drive with me, and I need some advice on how to fix the MBR, or to hear that it's not the MBR. I'm going to buy a new HDD (Hitachi 7K2000) because I think my current HDD is compromised and it's unsafe to use it as a system drive, especially 24/7.
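
    Purely as a sketch of the "check and fix the bootloader" route mentioned above - the device names and partition numbers are placeholders that have to be verified first, and this assumes the drive is still healthy enough to read - a live Linux USB session would typically be used like this:

        # Check that the partition table on the suspect system drive is still intact
        sudo fdisk -l /dev/sda

        # Check the drive's own health before trusting it any further
        sudo smartctl -a /dev/sda | grep -i -E "reallocated|pending|overall"

        # If the partitions look fine, reinstall GRUB from the live session
        # (mount the Linux root partition first; sda5 is illustrative)
        sudo mount /dev/sda5 /mnt
        sudo grub-install --root-directory=/mnt /dev/sda

    The Windows-side equivalent, if the goal is just to get Windows booting again from its own loader, is bootrec /fixmbr (and /fixboot) from the install DVD's recovery console - with the caveat that it overwrites GRUB.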

  • BlueScreens on my ThinkPad with Windows 7 64 Bit and a SSD (CRITICAL_OBJECT_TERMINATION, ntoskernel.exe)

    - by pvorb
    I've been getting BlueScreens about every five days for more than three months. Here's an example:

        A problem has been detected and Windows has been shut down to prevent damage to your computer.
        The problem seems to be caused by the following file: ntoskrnl.exe
        CRITICAL_OBJECT_TERMINATION
        If this is the first time you've seen this stop error screen, restart your computer. If this screen appears again, follow these steps:
        Check to make sure any new hardware or software is properly installed. If this is a new installation, ask your hardware or software manufacturer for any Windows updates you might need.
        If problems continue, disable or remove any newly installed hardware or software. Disable BIOS memory options such as caching or shadowing. If you need to use safe mode to remove or disable components, restart your computer, press F8 to select Advanced Startup Options, and then select Safe Mode.
        Technical Information:
        *** STOP: 0x000000f4 (0x0000000000000003, 0xfffffa80065f2b30, 0xfffffa80065f2e10, 0xfffff80002f9bf40)
        *** ntoskrnl.exe - Address 0xfffff80002c98d00 base at 0xfffff80002c19000 DateStamp 0x4d9fdd5b

    It has always been the same BlueScreen message, showing CRITICAL_OBJECT_TERMINATION, 0x000000f4 and ntoskrnl.exe. Of course the addresses change.

    My computer is a ThinkPad T400 (about 2 years old) with an SSD in it, running Windows 7 Professional 64 bit. When I bought the computer it had a 250 GByte Seagate HDD in it, which I replaced with a 500 GByte HDD by Western Digital. Last September I bought a Corsair F120 SSD and replaced the HDD with the SSD. Then I bought a LEICKE HDD adapter for the UltraBay II, where I plugged in my 500 GByte HDD. This configuration ran for about half a year without any errors.

    After re-installing Windows this spring, I am getting regular BlueScreens. Sometimes my system runs for about 2 weeks without a BSOD, sometimes I get several BlueScreens a day. The only thing I have noticed is that I'm always running Google Chrome when it happens.

    Is there anyone who has had bad experiences with some of my components, or can anybody tell me whether it would be helpful to send my notebook to Lenovo? Thank you very much for your help on my issue!

    Regards, Paul

  • Is there a way to replicate a very large file shares in real-time?

    - by fsckin
    I have an hourly cron job that copies about 40GB of data from a source folder into a new folder with the hour appended on the end. When it's done, the job prunes anything older than 24 hours. This data changes very often during work hours and lives on a Samba file share. Here's how the folder structure looks:

        \\server\Version.1
        \\server\Version.2
        \\server\Version.3
        ...
        \\server\Version.24

    The contents of each new folder compared to the last one usually don't change very much, since this is an hourly job. Now you might be thinking that I'm an idiot for dreaming this up. Truth is, I just found out about it. It's actually been used for years and is so incredibly simple that anyone could delete the ENTIRE 40GB share (imagine that dialog spooling up... deleting thousands and thousands of files) and it would actually be faster to restore by moving the latest copy back to the source than it took to delete. Brilliant!

    Now, to top this off, I need to efficiently replicate this 960GB of "mostly similar" data to a remote server over a WAN link, with the replication happening as close to real-time as possible - think hot spare, disaster recovery, etc.

    My first thought was rsync. Total failure: rsync sees a deletion of the folder that is 24 hours old and the addition of a new folder with 30GB of data to sync! I also looked at rdiff-backup and unison; they both appear to use similar algorithms and do not keep enough metadata to do this intelligently. The best thing I can find "out of the box" to do this is Windows Server "Distributed Filesystem Replication", which uses "Remote Differential Compression" - after reading the background information on how it works, it actually looks like exactly what I need. Problem: both servers are running Linux. D'oh!

    One approach I'm looking at is this. Say it's 5AM and the cron job finishes:

    1. The new Version.5 folder arrives on the local server.
    2. SSH to the remote server and copy Version.4 to Version.5.
    3. Run rsync on the local server, pushing changes to the remote server. Rsync finally knows to do a differential copy between Version.4 and Version.5.

    Is there a smarter way to replicate Samba shares as close to real-time as possible? Anything out there that does "Remote Differential Compression" on Linux?
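
    A minimal sketch of the numbered approach above, with a hard-link twist that avoids physically duplicating Version.4 on the remote side (the hostnames and paths are placeholders, and this assumes the remote copy sits on an ordinary Linux filesystem):

        # On the remote server: create Version.5 as a hard-linked snapshot of Version.4,
        # so unchanged files cost no extra space or copy time
        ssh backuphost 'cp -al /data/Version.4 /data/Version.5'

        # From the local server: push only the differences into the new folder
        # (no --inplace, so changed files are rewritten and their hard links broken)
        rsync -a --delete /shares/Version.5/ backuphost:/data/Version.5/

    rsync's --link-dest option can collapse the copy and the sync into a single invocation, which is roughly how snapshot tools like rsnapshot handle the same problem.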

  • domain/IN: has no NS records

    - by thejartender
    I have set up a home web server using Ubuntu 12.10, and I can safely say that it works with regard to router forwarding and ports being found. I know this because I switched my hosting provider's VPS SOA record to use my ISP IP as an 'A' value and had my website running from home. This verified that my server was configured correctly, so I started what I believe to be the final step in making my old desktop into a full DNS server. I found a tutorial that got me started.

    My LAN network consists of the following:

    My router, with a gateway of 10.0.0.zzz
    My server, with an IP of 10.0.0.xxx
    A laptop, with an IP of 10.0.0.yyy

    Step 1: I installed bind via sudo apt-get install bind9.

    Step 2: I configured /etc/bind/named.conf.local with:

        zone "sognwebdesign.no" {
            type master;
            file "/etc/bind/zones/sognwebdesign.no.db";
        };

        zone "0.0.10.in-addr.arpa" {
            type master;
            file "/etc/bind/zones/rev.0.0.10.in-addr.arpa";
        };

    Step 3: Updated /etc/bind/named.conf.options with two ISP DNS addresses.

    Step 4: Updated /etc/resolv.conf with:

        nameserver 10.0.0.xxx
        search lan
        search sognwebdesign.no

    Step 5: Created an /etc/bind/zones directory.

    Step 6: Created /etc/bind/zones/sognwebdesign.no.db with:

        $TTL 3D
        @       IN      SOA     ns.sognwebdesign.no. admin.sognwebdesign.no. (
                                2007062001
                                28800
                                3600
                                604800
                                38400 );
        sognwebdesign.no.       IN      NS      ns1.sognwebdesign.no.
        sognwebdesign.no.       IN      NS      ns2.sognwebdesign.no.
        sognwebdesign.no.       IN      NS      ns3.sognwebdesign.no.
        NS1                     IN      A       10.0.0.1
        NS2                     IN      A       10.0.0.2
        NS3                     IN      A       10.0.0.3
        www                     IN      A       10.0.0.4
        yuccalaptop             IN      A       10.0.0.19
        gw                      IN      A       10.0.0.138
                                        TXT     "Network Gateway"

    Step 7: Created /etc/bind/zones/rev.0.0.10.in-addr.arpa with:

        $TTL 3D
        @       IN      SOA     ns.sognwebdesign.no. admin.sognwebdesign.no. (
                                2007062001
                                28800
                                604800
                                604800
                                86400 );
        zzz     IN      PTR     gw.sognwebdesign.no.
        1       IN      PTR     ns1.sognwebdesign.no.
        2       IN      PTR     ns2.sognwebdesign.no.
        3       IN      PTR     ns3.sognwebdesign.no.
        yyy     IN      PTR     yuccalaptop.sognwebdesign.no.

    I then restart bind, and dig -x sognwebdesign.no works. Lastly, I run named-checkzone on each of my zone files, but my reverse zone check fails with:

        sognwedesign.no/IN: has no NS records

    Can anyone explain what I am doing wrong here, or assist me in getting this configured correctly?
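
    For what it's worth, named-checkzone insists on NS records at the apex of every zone, including the reverse one, and the reverse zone file shown above only contains SOA and PTR records. A hedged sketch of the lines that would satisfy that check (the names follow the zone files above), added to rev.0.0.10.in-addr.arpa after the SOA:

        @       IN      NS      ns1.sognwebdesign.no.
        @       IN      NS      ns2.sognwebdesign.no.
        @       IN      NS      ns3.sognwebdesign.no.

    The check itself takes the zone origin as the first argument and the file as the second, so re-running it would look like:

        named-checkzone sognwebdesign.no /etc/bind/zones/sognwebdesign.no.db
        named-checkzone 0.0.10.in-addr.arpa /etc/bind/zones/rev.0.0.10.in-addr.arpa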

  • A Windows Update Prevented Live Discs From Working?

    - by user88311
    First off I'll state my system specs:

    Acer Aspire M1100
    Windows Vista Home Basic 32-bit OEM
    2 gigs of DDR2 RAM
    160 gig hard drive
    2.7 GHz AMD Athlon 64 processor
    ATI Radeon X1250 graphics card

    A few days ago my computer ran automatic updates and updated Windows Defender to KB915597 (Definition 1.135.415.0). After that, when shutting down and starting up, I would receive a BSOD with the information BUGCODE_USE_DRIVER and 0x000000FE (0x00000008, 0x00000006, 0x00000006, 0x877330000), after which my computer would not start up with any USB devices plugged in, and it always required me to run startup repair before it started.

    When I first started it up and was able to fully boot Windows, I had no use of the mouse, so I was unable to install the fix that the Windows solutions center brought up on my screen. I restarted again and installed the fix, hoping it would stop the problem; it did not. Upon starting up after installing the fix and restarting, I was confronted with the BSOD 0x000000FE (0x00000008, 0x00000006, 0x00000006, 0x83291000), at which point I found that startup repair could not fix the problem, and I restored, as I most likely should have done in the first place. After going through that, I read that simply installing the latest Defender version from the Microsoft site had fixed this problem for others, so I did that, only to find I still received the BSODs.

    So, in an attempt to find a fix, I went to the Microsoft Answers site, where I was told to simply disable Defender and reboot to see if that fixed the problem. Upon doing this, my computer would no longer even start up: when I boot normally, I get to just when the loading screen finishes and then my computer restarts, and when I run startup repair, it runs for about 15 seconds and then the computer restarts as well.

    I have tried running Ubuntu live discs in order to simply access the drive and copy the 2-month-old physical backup I have of my C drive back onto the C drive, but whenever I run the live OS, when it gets to the end of the load screen and is about to boot, the computer again restarts. Yet if I put in a GParted disc, I am able to boot it fully, although it does not give me access to the file system, just partition managing, and when I attempt to access the internet through it, the computer once again restarts.

    So my question is: how could the update, and what has happened since, prevent me from running the live OSes properly?

  • Wifi network stopped being visible (and usable) (Linksys wag320n)

    - by s427
    Basically, my wifi network simply stopped working for no apparent reason. It doesn't appear in the list of available networks anymore. I can see all my neighbors' networks, but not mine. It's as if it doesn't exist anymore. The internet connection (non-wifi), which goes through the same modem/router, is fine though. I already had a similar problem about one year ago (see here: Wifi network SSID not visible), just after buying this very modem. I finally got it to work after performing two factory resets and getting rid of the Cisco "Magic" software, but this time that's not working.

    I use a Linksys router-modem (WAG320N) which is directly connected (via network cable) to my desktop computer (Windows 7). I have (mainly) two devices that use the wifi network: my phone (Samsung Galaxy Nexus) and an Asus tablet (TF201, aka Transformer Prime). I also resurrected an old laptop (Dell, running Windows XP) to test, and it doesn't see anything either (apart from the 20 other wifi networks, of course ^^). This wifi network was working just fine and has been for about a year. I haven't touched the modem settings, so I have no idea what's causing the problem.

    I tried:

    making my phone "forget" my network, hoping it would see it again after that: no luck.
    re-entering the network information (SSID/password) manually on my phone: still no luck (it says it's not in range).
    exporting the modem configuration, resetting the modem (factory reset, via the modem admin), restarting it, importing the configuration: nope.
    factory reset, turning it off for 15 minutes, restarting, re-doing the factory reset, and entering the configuration manually: still nothing.

    Has anybody experienced something similar before? Do you have any suggestions to fix this? Thanks in advance.

    PS: to clear things up, here are the settings of my modem regarding wifi:

        Basic Wireless Settings
        Configuration: manual
        Radio Band: 2.4GHz
        Wireless Network Mode: B/G/N-Mixed
        SSID: s427
        Channel Bandwidth: Wide - 40 MHz Channel
        Wide Channel: 9 - 2.452GHz
        Standard Channel: 11 - 2.462GHz
        SSID Broadcast: Enable

        Advanced Wireless Settings
        AP Isolation: Disable
        Authentication Type: Auto
        Basic Rate: Default
        Transmission Rate: Auto
        N Transmission Rate: Auto
        CTS Protection Mode: Disable
        Beacon Interval: 100
        DTIM Interval: 1
        Fragmentation Threshold: 2346
        RTS Threshold: 2346

  • 2 servers, high availability and faster response

    - by user17886
    I recently bought a second webserver because I worry about hardware failure of my old server. Now that I have that second server, I wish to do a little more than just have one server on standby, replicating all day. As long as it's there, I might as well get some advantage out of it!

    I have a website powered by Ubuntu 12.04, nginx, php-fpm, APC, MySQL (5.5) and CouchDB. I'm currently testing configurations where I can achieve failover AND make good use of the extra hardware for faster responses / distributed load.

    The setup I am testing now involves Heartbeat for IP failover and two identical servers. Of the two servers, only one has a public IP address. If one server crashes, the other server takes over the public IP address. On an incoming request, nginx forwards the request to php-fpm on either server A or server B (50/50 if both servers are alive). Once the request has been sent to php-fpm, both servers look at localhost for the MySQL server; I use master-master MySQL replication for this. The file system is synced with lsyncd. This works pretty well, but I'm reading that it's discouraged by the (MySQL) community.

    Another option I could think of is to use one server as a MySQL master and one server as a web/PHP server. The servers would still sync their file systems and still run the same duplicate software (nginx, MySQL), but master-slave MySQL replication could be used. As long as both servers are alive, I could simply prefer nginx to listen on IP A and MySQL on IP B. If one server is down, the other server could take over the other's tasks, simply by IP switching.

    But I'm completely new at this, so I would greatly value your expert advice. Is either of the two setups any good? If you have any thoughts on this, please let me know!

    PS: virtualisation, hosting in different locations or active/passive setups are not the solutions I'm looking for. I find virtual servers either too slow or too expensive. I already have a passive failover at another location, but in case of a crash I found the site was still unreachable for too long due to DNS caching.

  • Windows 7 startup MUCH slower after reinstall (on an SSD)

    - by user326639
    I installed Windows 7 Professional 64-bit OEM (Spanish) on my new machine. As I wanted my Windows to be in English, the web shop where I bought the DVD recommended that I download an ISO file with the same Windows version (but in English), burn it to a DVD and install it, and said that I should be able to use my registration code. Location of the ISO: http://msft-dnl.digitalrivercontent.net/msvista/pub/X15-65805/X15-65805.iso

    I've done this and everything works (I have not activated my Windows yet, but I expect no problem there). Just one thing: its startup is MUCH slower now! Have a look at my PC specs (bottom).

    On my first install (Spanish), it was like this:

    motherboard splash screen -- shows for a second or two
    list of found drives -- a few seconds
    the text "Windows starting" -- about a second before the dots appear
    four colored dots form the Windows logo -- a few seconds after the logo is fully formed, it moves on to the login screen

    On my second install (English):

    motherboard splash screen -- shows for 15 seconds
    list of found drives -- a few seconds
    the text "Windows starting" -- shows for 40 seconds before the dots appear
    four colored dots form the Windows logo -- now it moves on to the login screen about equally fast as before

    Once it's up and running, it seems to be as responsive as before, although it's possible that I'm not noticing the difference.

    I did the first install on the virgin SSD straight from the box. The second time, I let the Windows installation program format the drive first to get rid of the old installation. I noticed that there were two partitions on my SSD: partition 1, 100 MB, "reserved for the system", and partition 2, 111.7 GB. I only formatted the big partition and left the system partition untouched. Between the two installs I didn't open the computer, so everything is connected to the same port, and I did not change anything in the BIOS.

    Has Windows not recognized my SSD as an SSD, but treated it as a normal HDD? I suspect that Windows has not applied the automatic configuration settings that it should apply for SSDs (but that's just a hunch). How do I get my SSD back into its virgin state, as if it came right from the box, so I can go for a third attempt at installing Windows? Should I use DISKPART? Other ideas are welcome.

    Specifications:
    Mobo: Gigabyte GA-Z68X-UD3H-B3
    CPU: i7-2600K
    SSD: OCZ Agility 3 2.5"
    HDD: Samsung Spinpoint F4
    Mem: Kingston HyperX DIMM 8 GB DDR3-1600
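
    Since the question explicitly mentions DISKPART: the usual way to return a drive to a blank, unpartitioned state before a clean install is the clean command, run from the installer's command prompt (Shift+F10). This is only a sketch - clean wipes the partition table of whichever disk is selected:

        diskpart
        DISKPART> list disk
        DISKPART> select disk 0
        DISKPART> clean
        DISKPART> exit

    Here list disk identifies the SSD by its size (about 111 GB in this case), select disk 0 is an assumption (use the number that list disk actually shows for the SSD), and clean removes all partitions, including the 100 MB system-reserved one. Letting Windows Setup recreate the partitions on the now-empty disk normally restores the SSD-specific defaults (alignment, TRIM); later, from within Windows, fsutil behavior query DisableDeleteNotify reports 0 when TRIM is enabled.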

  • Blocking access to websites with objective-C / root privileges in objective-C

    - by kvaruni
    I am writing a program in Objective-C (Xcode 3.2, on Snow Leopard) that is capable of either selectively blocking certain sites for a duration, or only allowing certain sites (and thus blocking all others) for a duration. The reasoning behind this program is rather simple: I tend to get distracted when I have full internet access, but I do need internet access during my working hours to get to a number of work-related websites. Clearly, this is not a permanent block; it only helps me to focus whenever I find myself wandering a bit too much.

    At the moment, I am using a Unix script that is called via AppleScript to obtain administrator permissions. It then activates a number of ipfw rules and clears them after a specific duration to restore full internet access. Simple and effective, but since I am running as a standard user, it gets cumbersome to enter my administrator password each and every time I want to go "offline". Furthermore, this is a great opportunity to learn to work with Xcode and Objective-C.

    At the moment, everything works as expected, minus the actual blocking. I can add a number of sites to a list, specify whether I want to block or allow these websites, and "start" the blocking by specifying a time until which I want to stay "offline". However, I find it hard to obtain clear information on how I can run a privileged Unix command from Objective-C.

    Ideally, I would like to be able to store information about the administrator account in the Keychain and use it later on, so that I can simply move into "offline" mode with the convenience of clicking a button. Even more ideally, there might be some class in Objective-C with which I can block access to some/all websites for this particular user without needing to rely on privileged Unix commands at all. A third possibility is starting this program with root permissions and then reducing the permissions until I need them, but since this is a GUI application that is nested in the menu bar of OS X, the results are rather awkward, and getting it to run with root permission each and every time is no easy task.

    Can anyone offer me some pointers or advice? Please, no security warnings; I am fully aware that what I want to do is a potential security threat.
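
    For context, the ipfw side of the existing script presumably looks something like the sketch below (the rule number and destination address are illustrative only). The Objective-C question then largely reduces to executing these two commands with elevated rights - for example via the Authorization Services API (AuthorizationExecuteWithPrivileges on Snow Leopard) or a separate privileged helper - instead of prompting through AppleScript each time:

        # Block outbound web traffic to one distracting site for the current session
        # (203.0.113.7 stands in for the site's resolved address)
        sudo ipfw add 2000 deny tcp from me to 203.0.113.7 dst-port 80

        # Later, lift the block again by deleting the same rule number
        sudo ipfw delete 2000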

  • This page calls for XML namespace declared with prefix br but no taglibrary exists

    - by Christopher W. Allen-Poole
    I just finished the Netbeans introduction to Hibernate tutorial (http://netbeans.org/kb/docs/web/hibernate-webapp.html#01) and I am getting the following error:

        This page calls for XML namespace declared with prefix br but no taglibrary exists

    Now, I have seen a similar question somewhere else (http://forums.sun.com/thread.jspa?threadID=5430327), but the answer is not listed there. Or, if it is, then I am clearly missing it -- line one of my index.xhtml file reads "http://www.w3.org/1999/xhtml". It also does not explain why, when I reload localhost:8080, the message disappears. Here is my index.xhtml file:

        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:h="http://java.sun.com/jsf/html"
              xmlns:ui="http://java.sun.com/jsf/facelets"
              xmlns:f="http://java.sun.com/jsf/core">
            <ui:composition template="./template.xhtml">
                <ui:define name="body">
                    <h:form>
                        <h:commandLink action="#{filmController.previous}" value="Previous #{filmController.pageSize}" rendered="#{filmController.hasPreviousPage}"/>
                        <h:commandLink action="#{filmController.next}" value="Next #{filmController.pageSize}" rendered="#{filmController.hasNextPage}"/>
                        <h:dataTable value="#{filmController.filmTitles}" var="item" border="0" cellpadding="2" cellspacing="0" rowClasses="jsfcrud_odd_row,jsfcrud_even_row" rules="all" style="border:solid 1px">
                            <h:column>
                                <f:facet name="header">
                                    <h:outputText value="Title"/>
                                </f:facet>
                                <h:outputText value="#{item.title}"/>
                            </h:column>
                            <h:column>
                                <f:facet name="header">
                                    <h:outputText value="Description"/>
                                </f:facet>
                                <h:outputText value="#{item.description}"/>
                            </h:column>
                            <h:column>
                                <f:facet name="header">
                                    <h:outputText value=" "/>
                                </f:facet>
                                <h:commandLink action="#{filmController.prepareView}" value="View"/>
                            </h:column>
                        </h:dataTable>
                        <br/>
                    </h:form>
                </ui:define>
            </ui:composition>
        </html>

  • Unable to resolve class in build.gradle using Android Studio 0.60/Gradle 0.11

    - by saywhatnow
    Established app working fine using Android Studio 0.5.9 / Gradle 0.9, but upgrading to Android Studio 0.6.0 / Gradle 0.11 causes the error below. Somehow Studio seems to have lost the ability to resolve the android tools import at the top of the build.gradle file. Anyone got any ideas on how to solve this?

        build file 'Users/[me]/Repositories/[project]/[module]/build.gradle': 1: unable to resolve class com.android.builder.DefaultManifestParser
         @ line 1, column 1.
           import com.android.builder.DefaultManifestParser
        1 error

        at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:302)
        at org.codehaus.groovy.control.CompilationUnit.applyToSourceUnits(CompilationUnit.java:858)
        at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:548)
        at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:497)
        at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:306)
        at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:287)
        at org.gradle.groovy.scripts.internal.DefaultScriptCompilationHandler.compileScript(DefaultScriptCompilationHandler.java:115)
        ... 77 more

        2014-06-09 10:15:28,537 [  92905] INFO - .BaseProjectImportErrorHandler - Failed to import Gradle project at '/Users/[me]/Repositories/[project]'
        org.gradle.tooling.BuildException: Could not run build action using Gradle distribution 'http://services.gradle.org/distributions/gradle-1.12-all.zip'.
        at org.gradle.tooling.internal.consumer.ResultHandlerAdapter.onFailure(ResultHandlerAdapter.java:53)
        at org.gradle.tooling.internal.consumer.async.DefaultAsyncConsumerActionExecutor$1$1.run(DefaultAsyncConsumerActionExecutor.java:57)
        at org.gradle.internal.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:64)

    [project]/[module]/build.gradle

        import com.android.builder.DefaultManifestParser

        apply plugin: 'android-sdk-manager'
        apply plugin: 'android'

        android {
            sourceSets {
                main {
                    manifest.srcFile 'src/main/AndroidManifest.xml'
                    res.srcDirs = ['src/main/res']
                }
                debug {
                    res.srcDirs = ['src/debug/res']
                }
                release {
                    res.srcDirs = ['src/release/res']
                }
            }

            compileSdkVersion 19
            buildToolsVersion '19.0.0'

            defaultConfig {
                minSdkVersion 14
                targetSdkVersion 19
            }

            signingConfigs {
                release
            }

            buildTypes {
                release {
                    runProguard false
                    proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.txt'
                    signingConfig signingConfigs.release
                    applicationVariants.all { variant ->
                        def file = variant.outputFile
                        def manifestParser = new DefaultManifestParser()
                        def wmgVersionCode = manifestParser.getVersionCode(android.sourceSets.main.manifest.srcFile)
                        println wmgVersionCode
                        variant.outputFile = new File(file.parent, file.name.replace("-release.apk", "_" + wmgVersionCode + ".apk"))
                    }
                }
            }

            packagingOptions {
                exclude 'META-INF/LICENSE.txt'
                exclude 'META-INF/NOTICE.txt'
            }
        }

        def Properties props = new Properties()
        def propFile = file('signing.properties')
        if (propFile.canRead()) {
            props.load(new FileInputStream(propFile))
            if (props != null && props.containsKey('STORE_FILE') && props.containsKey('STORE_PASSWORD') && props.containsKey('KEY_ALIAS') && props.containsKey('KEY_PASSWORD')) {
                println 'RELEASE BUILD SIGNING'
                android.signingConfigs.release.storeFile = file(props['STORE_FILE'])
                android.signingConfigs.release.storePassword = props['STORE_PASSWORD']
                android.signingConfigs.release.keyAlias = props['KEY_ALIAS']
                android.signingConfigs.release.keyPassword = props['KEY_PASSWORD']
            } else {
                println 'RELEASE BUILD NOT FOUND SIGNING PROPERTIES'
                android.buildTypes.release.signingConfig = null
            }
        } else {
            println 'RELEASE BUILD NOT FOUND SIGNING FILE'
            android.buildTypes.release.signingConfig = null
        }

        repositories {
            maven { url 'https://repo.commonsware.com.s3.amazonaws.com' }
            maven { url 'https://oss.sonatype.org/content/repositories/snapshots/' }
        }

        dependencies {
            compile 'com.github.gabrielemariotti.changeloglib:library:1.4.+'
            compile 'com.google.code.gson:gson:2.2.4'
            compile 'com.google.android.gms:play-services:+'
            compile 'com.android.support:appcompat-v7:+'
            compile 'com.squareup.okhttp:okhttp:1.5.+'
            compile 'com.octo.android.robospice:robospice:1.4.11'
            compile 'com.octo.android.robospice:robospice-cache:1.4.11'
            compile 'com.octo.android.robospice:robospice-retrofit:1.4.11'
            compile 'com.commonsware.cwac:security:0.1.+'
            compile 'com.readystatesoftware.sqliteasset:sqliteassethelper:+'
            compile 'com.android.support:support-v4:19.+'
            compile 'uk.co.androidalliance:edgeeffectoverride:1.0.1+'
            compile 'de.greenrobot:eventbus:2.2.1+'
            compile project(':captureActivity')
            compile ('de.keyboardsurfer.android.widget:crouton:1.8.+') {
                exclude group: 'com.google.android', module: 'support-v4'
            }
            compile files('libs/CWAC-LoaderEx.jar')
        }

  • jQuery override default validation error message display (Css) Popup/Tooltip like

    - by Phill Pafford
    I'm trying to override the default error message label with a div instead of a label. I have looked at this post as well and get how to do it, but my limitations with CSS are haunting me. How can I display this like some of these examples:

    Example #1 (Dojo) - you must type invalid input to see the error display
    Example #2

    Here is some example code that overrides the error label to a div element:

        $(document).ready(function(){
            $("#myForm").validate({
                rules: {
                    "elem.1": {
                        required: true,
                        digits: true
                    },
                    "elem.2": {
                        required: true
                    }
                },
                errorElement: "div"
            });
        });

    Now I'm at a loss on the CSS part, but here it is:

        div.error {
            position: absolute;
            margin-top: -21px;
            margin-left: 150px;
            border: 2px solid #C0C097;
            background-color: #fff;
            color: white;
            padding: 3px;
            text-align: left;
            z-index: 1;
            color: #333333;
            font: 100% arial,helvetica,clean,sans-serif;
            font-size: 15px;
            font-weight: bold;
        }

    UPDATE: Okay, I'm using the code below now, but the image and the placement of the popup are larger than the border; can this be adjusted to be dynamic in height?

        if (element.attr('type') == 'radio' || element.attr('type') == 'checkbox') {
            element = element.parent();
            offset = element.offset();
            error.insertBefore(element);
            error.addClass('message'); // add a class to the wrapper
            error.css('position', 'absolute');
            error.css('left', offset.left + element.outerWidth());
            // Not working for radio: displays towards the bottom of the element.
            // Also need to test with checkbox.
            error.css('top', offset.top - (element.height() / 2));
        } else {
            // Error placement for single elements
            offset = element.offset();
            error.insertBefore(element);
            error.addClass('message'); // add a class to the wrapper
            error.css('position', 'absolute');
            error.css('left', offset.left + element.outerWidth());
            error.css('top', offset.top - (element.height() / 2));
        }

    The CSS is the same as below (your CSS code).

    HTML:

        <span>
            <input type="radio" class="checkbox" value="P" id="radio_P" name="radio_group_name"/>
            <label for="radio_P">P</label>
            <input type="radio" class="checkbox" value="S" id="radio_S" name="radio_group_name"/>
            <label for="radio_S">S</label>
        </span>
