Search Results

Search found 2803 results on 113 pages for 'manca weeks'.

  • Compression Program in C

    - by Delandilon
    I want to compress a series of characters with run-length encoding. For example:

        Input:  FFFFFBBBBBBBCCBBBAABBGGGGGSSS (29 x 8 bits = 232 bits)
        Output: F5B7C2B3A2B2G5S3 (16 x 8 bits = 128 bits)

    So far I can count the number of occurrences of a character in the array, but the most important task is to count them in sequence, and I can't seem to figure that out. :( I started doing C just a few weeks back; I have knowledge of arrays, pointers and ASCII values, but I still can't count these characters as consecutive runs. I've tried a bit of everything. This approach is no good, but it's the closest I've come:

        #include <stdio.h>

        int main(void)
        {
            int charcnt = 0, dotcnt = 0, commacnt = 0, blankcnt = 0, i;
            int countA = 0, countB = 0;
            char str[125];

            printf("*****String Manipulations*****\n\n");
            printf("Enter a string\n\n");
            scanf("%124[^\n]", str);
            printf("\n\nEntered String is \" %s \"\n", str);

            for (i = 0; str[i] != '\0'; i++) {
                /* counting exception characters */
                if (str[i] == ' ') blankcnt++;
                if (str[i] == '.') dotcnt++;
                if (str[i] == ',') commacnt++;
                if (str[i] == 'A' || str[i] == 'a') countA++;
                if (str[i] == 'B' || str[i] == 'b') countB++;
            }

            /* print result of count */
            charcnt = i;
            printf("\n\nTotal Characters : %d", charcnt);
            printf("\nTotal Blanks : %d", blankcnt);
            printf("\nTotal Full stops : %d", dotcnt);
            printf("\nTotal Commas : %d\n\n", commacnt);
            printf("A%d B%d\n", countA, countB);
            return 0;
        }
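
    The missing piece is a scan that remembers the current character and counts how long its run is. Here is a minimal sketch of that idea (an illustration in plain C, not the poster's code):

        #include <stdio.h>

        /* Run-length encoding sketch: walk the string once and emit a
           letter-count pair for every run of consecutive equal characters. */
        int main(void)
        {
            char str[125];
            int i = 0;

            if (scanf("%124[^\n]", str) != 1)
                return 1;

            while (str[i] != '\0') {
                char current = str[i];
                int run = 0;
                while (str[i] == current) { /* consume the whole run */
                    run++;
                    i++;
                }
                printf("%c%d", current, run);
            }
            printf("\n");
            return 0;
        }

    For the sample input above this prints F5B7C2B3A2B2G5S3.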

  • PHP- Display days weekly by giving 2 dates

    - by librium
    I'd like to display the dates between two given dates, grouped under week numbers, as in the example below. Is this possible in PHP? If the dates are 2010-12-01 through 2010-12-19, it would display as follows:

        week-1
        2010-12-01 2010-12-02 2010-12-03 2010-12-04 2010-12-05 2010-12-06 2010-12-07
        week-2
        2010-12-08 2010-12-09 2010-12-10 2010-12-11 2010-12-12 2010-12-13 2010-12-14
        week-3
        2010-12-15 2010-12-16 2010-12-17 2010-12-18 2010-12-19

    and so on. I use MySQL; the table has startdate and enddate fields. Thank you in advance. I can get the number of weeks between the two dates with a datediff('ww', '2010-12-01', '2010-12-19', false) function I found on the internet, and I can display the dates between two dates as follows, but I am having trouble grouping them by week:

        $sdate = "2010-12-01";
        $edate = "2010-12-19";

        $days = getDaysInBetween($sdate, $edate);
        foreach ($days as $val) {
            echo $val;
        }

        function getDaysInBetween($start, $end) {
            // Vars
            $day = 86400;               // Day in seconds
            $format = 'Y-m-d';          // Output format (see PHP date function)
            $sTime = strtotime($start); // Start as time
            $eTime = strtotime($end);   // End as time
            $numDays = round(($eTime - $sTime) / $day) + 1;
            $days = array();

            // Get days
            for ($d = 0; $d < $numDays; $d++) {
                $days[] = date($format, ($sTime + ($d * $day)));
            }

            // Return days
            return $days;
        }
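
    Since the weeks in the example are simply blocks of seven days counted from the start date (not ISO calendar weeks), one approach, sketched here reusing the poster's day list and untested against the actual schema, is to split that list into chunks of seven with array_chunk():

        <?php
        // Sketch: group a day range into 7-day blocks counted from the start date.
        $days = getDaysInBetween("2010-12-01", "2010-12-19"); // poster's function above

        foreach (array_chunk($days, 7) as $i => $week) {
            echo "week-" . ($i + 1) . "\n";
            echo implode(' ', $week) . "\n";
        }

    One caveat worth hedging: stepping in fixed 86400-second increments (as getDaysInBetween does) can drift by an hour across DST changes; date('Y-m-d', strtotime("+$d days", $sTime)) avoids that.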

  • DOS "pause" in Linux?

    - by user2930466
    Firstly, I'm REALLY new to programming. I just started my first programming class two weeks ago, and I apologize if I sound newbish. My professor wants me to implement a "press any key to continue..." prompt in my program: when I run it, one line should print [like printf("jfdskaljlfja");], then "press any key to continue" should appear before the next line runs. He told us that the DOS equivalent is system("pause"), but he wants us to do it on Linux. This is what my code looks like:

        #include <stdio.h>
        #include <stdlib.h> /* for system() */

        int main(void)
        {
            printf("This is the first line of this program\n");
            system("pause"); /* DOS/Windows only */
            printf("This is the second line\n");
            return 0;
        }

    Since he wants us to do this on Linux, system("pause") won't work here. Is there a way to do exactly what pause does, but in Linux terms? Again, sorry if I sound newbish, and thank you so much! He doesn't really care whether the code is efficient, as long as it runs, so the simplest answer would be much appreciated. :)
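
    The usual beginner-friendly substitute, hedged in that it waits for Enter rather than literally any key (normally acceptable for this kind of assignment), is to print the prompt yourself and block on getchar():

        #include <stdio.h>

        int main(void)
        {
            printf("This is the first line of this program\n");
            printf("Press Enter to continue...");
            getchar(); /* blocks until Enter is pressed */
            printf("This is the second line\n");
            return 0;
        }

    Reacting to truly any key requires switching the terminal out of line-buffered mode (for example via termios), which is more machinery than a two-week-old class is likely to expect.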

  • SMB2 traffic crashes network?

    - by Phil Cross
    We've been having significant network slowdown issues over the past few weeks, primarily on Friday mornings. We run Windows 7 client machines with Windows Server 2008 R2 servers. The network generally starts to slow down massively at 08:55 and resumes normal speed around 09:20. This affects everything on the network: logging on, resetting passwords, opening programs and files, and so on. On my client machine, physical memory usage stays around 40% (normal) and CPU usage hovers around 0-10% idle. On the servers, memory usage spikes massively and remains high during the times mentioned above.

    I have taken several Wireshark captures, both during the slowdown and when the network is operating fine. The main thing I noticed is the increase in SMB2 entries during the slowdown:

        Record  Time         Source       Destination  Protocol  Length  Info
        382     3.976460000  10.47.35.11  10.47.32.3   SMB2      362     Create Request File: pcross\My Documents
        413     4.525047000  10.47.35.11  10.47.32.3   SMB2      146     Close Request File: pcross\My Documents
        441     5.235927000  10.47.32.3   10.47.35.11  SMB2      298     Create Response File: pcross\My Documents\Downloads
        442     5.236199000  10.47.35.11  10.47.32.3   SMB2      260     Find Request File: pcross\My Documents\Downloads SMB2_FIND_ID_BOTH_DIRECTORY_INFO Pattern: *; Find Request File: pcross\My Documents\Downloads SMB2_FIND_ID_BOTH_DIRECTORY_INFO Pattern: *
        573     6.327634000  10.47.35.11  10.47.32.3   SMB2      146     Close Request File: pcross\My Documents\Downloads
        703     7.664186000  10.47.35.11  10.47.32.3   SMB2      394     Create Request File: pcross\My Documents\Downloads\WestlandsProspectus\P24 __ P21.pdf

    These are a few of the SMB2 records, from a list of a couple of hundred, which originate from my computer with the file server as the destination. Interestingly, the last entry above is for a PDF file that was not open anywhere on my computer, or on anyone else's; no folders containing it were open either. When I took another capture while the network was running fine, there were hardly any SMB2 entries, and the ones that did appear were mainly from Wireshark itself.

    We currently have around 800 computers, 90 Macs and 200 laptops and netbooks. Our concern is that if this traffic is happening on my computer, it may be happening on other computers too, and if so, those computers would be adding to the slowdown. Again, this only happens during certain times, and we're fairly sure it's not our antivirus. Is there any way to narrow down what is initiating this SMB traffic during those particular times? Any extra advice, or links to resources, would be appreciated.
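
    One way to test whether other machines are doing the same thing, sketched here on the assumption that tshark is available and eth0 is an interface that can see the file server's traffic, is to capture only SMB traffic (TCP 445) during the slowdown window and rank the talkers:

        # Capture five minutes of SMB/SMB2 traffic during the 08:55-09:20 window...
        tshark -i eth0 -f "tcp port 445" -a duration:300 -w smb-slowdown.pcap

        # ...then list the TCP conversations by volume to see which clients dominate.
        tshark -r smb-slowdown.pcap -q -z conv,tcp

    If dozens of clients show the same create/find/close churn that your capture shows, the cause is more likely something rolled out centrally (logon scripts, folder redirection, indexing or AV scans scheduled around 09:00) than one misbehaving PC.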

  • Fatal error 9001 on shared SQL Server 2008

    - by user643192
    I've asked this same question on Stack Overflow, but I might actually have a better chance of an answer here, so I am posting it here as well. I know this question has been asked here before, but none of the suggestions have worked for me.

    I have an ASP.NET MVC (v3) website on a shared server. The website was working fine for a few weeks, until I started getting a Fatal Error 9001 straight after login. Because this is a shared server, there are only very limited things I can do with the database (and I don't know that much about databases anyway). The help desk insists there is nothing wrong with their server. I got various suggestions from them:

    - Upgrading to the business plan because I am out of space (their first suggestion). Even though the .mdf data file is small, the .ldf log file can grow very quickly and is probably taking up all the space. I have 100MB available and the database size is 16.5MB; can the log file take up the remaining space? On querying this with the help desk, they admitted that my entire database is only 25MB.
    - There is something wrong with my SQL queries and I should check the website. I'm using EF with LINQ to SQL, and everything was working fine until now... can something going wrong in the queries cause this sort of error?
    - There is nothing wrong to be seen in the database logs, so this error "cannot possibly have happened"; I should log it the next time it happens and contact them again.

    I found some posts suggesting that restoring a database backup can get rid of the issue. I do not have a recent backup, and I can't take a new one because a fatal error 9001 occurs when I try. Since this is a shared server, I have about zero authority to execute anything against the database (think CHECKDB, truncating the log, etc.), so I am at my wits' end, pretty much. What else can I do or try to get my website moving again?
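
    For reference, error 9001 is "The log for database '%s' is not available", which is why the log file keeps coming up in the suggestions. If the host is willing to run anything on your behalf, two read-only checks (standard T-SQL, sketched here with a hypothetical database name) would show whether the log is full or the database has been flagged:

        -- How full is each database's transaction log?
        DBCC SQLPERF(LOGSPACE);

        -- Is the database ONLINE, or stuck in RECOVERY_PENDING/SUSPECT?
        SELECT name, state_desc, log_reuse_wait_desc
        FROM sys.databases
        WHERE name = 'YourDatabaseName';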

  • Will this RAID5 setup work (3TB Seagate Barracudas + Adaptec RAID 6405)?

    - by Slayer537
    As the title states: will this RAID combo work, and if not, what needs to be changed? Overall opinions would be most helpful.

    I currently run a small file server of about 5TB or so. I keep outgrowing my needs and want to build a RAID setup that will let me expand as needed. I am new to RAID setups, especially at the scale I currently have planned, but I have been doing research for the past couple of weeks and have come up with a build. Ideally I'd have the setup completely built already, but I'd like to keep the total cost around $1k and can't afford to go above $1.5k, so unfortunately that's not an option.

    Two of my current drives are 2TB WD Caviar Blacks; however, I have recently learned that due to the lack of TLER those drives are awful for any RAID setup other than 0 or 1. That said, my third drive is a 3TB Seagate Barracuda (ST3000DM001), and I have found a RAID controller that states it supports it, so I'd like to use this same type of drive if possible. Have any of you had any experience using this drive, or a similar one, in a RAID5 configuration? The manufacturer states that it is supported, but knowing that it is not an enterprise drive, I am slightly concerned that it could drop out of the array. I would just go with enterprise drives, but those cost about double...

    Parts list:

    - Storage rack: http://www.ebay.com/itm/SGI-3U-Media-Storage-Server-16-Hard-Drive-Bay-SATA-SAS-Expander-Omnistor-SE3016-/140735776937?pt=LH_DefaultDomain_0&hash=item20c48188a9
    - 3 more HDs (for now...): http://www.amazon.com/Seagate-Barracuda-3-5-Inch-Internal-ST3000DM001/dp/B005T3GRLY/ref=dp_return_2?ie=UTF8&n=172282&s=electronics
    - Adaptec RAID 6405: http://www.newegg.com/Product/Product.aspx?Item=N82E16816103224 (compatibility sheet, if that helps: http://download.adaptec.com/pdfs/compatibility_report/arc-sas_cr_03-27-12_series6.pdf)
    - SAS expander cable: http://www.pc-pitstop.com/sas_cables_adapters/8887-2M.asp

    My plan is to install the RAID card in my computer and route the SAS cable to the rack: set up a RAID5 across 3 drives, transfer my data over from my remaining drive, then add that drive to the array. Eventually I'd like to get a 2U unit, run the file server on that and move the RAID card over to it, but that will have to happen later on. Side note: the computer the card would be going into will be running Windows 7 Pro with 24GB of DDR3-1600 and an i7-930.

  • repeated failing passwords in linux security log (/var/log/secure)

    - by wallyk
    Recently, I opened up the SSH port through my firewalls (redirecting it to my server) so I could check on the (HTTP) server while on the road. For the first week or two there was nothing unusual, but now, three or four weeks later, I see lots of this:

        Mar 20 08:38:28 localhost sshd[21895]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=mail.queued.net user=root
        Mar 20 08:38:31 localhost sshd[21895]: Failed password for root from 207.210.101.209 port 2854 ssh2
        Mar 20 15:38:31 localhost sshd[21896]: Received disconnect from 207.210.101.209: 11: Bye Bye
        Mar 20 08:38:32 localhost unix_chkpwd[21900]: password check failed for user (root)
        Mar 20 08:38:32 localhost sshd[21898]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=mail.queued.net user=root
        Mar 20 08:38:34 localhost sshd[21898]: Failed password for root from 207.210.101.209 port 3729 ssh2
        Mar 20 15:38:35 localhost sshd[21899]: Received disconnect from 207.210.101.209: 11: Bye Bye
        Mar 20 08:38:36 localhost unix_chkpwd[21903]: password check failed for user (root)
        Mar 20 08:38:36 localhost sshd[21901]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=mail.queued.net user=root
        Mar 20 08:38:38 localhost sshd[21901]: Failed password for root from 207.210.101.209 port 4313 ssh2
        Mar 20 15:38:38 localhost sshd[21902]: Received disconnect from 207.210.101.209: 11: Bye Bye
        Mar 20 08:38:40 localhost unix_chkpwd[21906]: password check failed for user (root)
        Mar 20 08:38:40 localhost sshd[21904]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=mail.queued.net user=root
        Mar 20 08:38:42 localhost sshd[21904]: Failed password for root from 207.210.101.209 port 4869 ssh2
        Mar 20 15:38:43 localhost sshd[21905]: Received disconnect from 207.210.101.209: 11: Bye Bye
        Mar 20 08:38:44 localhost unix_chkpwd[21909]: password check failed for user (root)
        Mar 20 08:38:44 localhost sshd[21907]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=mail.queued.net user=root
        Mar 20 08:38:46 localhost sshd[21907]: Failed password for root from 207.210.101.209 port 2512 ssh2
        Mar 20 15:38:47 localhost sshd[21908]: Received disconnect from 207.210.101.209: 11: Bye Bye
        Mar 20 15:38:57 localhost sshd[21912]: Connection closed by 207.210.101.209

    There are about 1,100 lines of these for March 20th, zero for the 19th, and 800 or so for the 18th, all related to the same IP. What does it mean? What should I do? And why isn't the log chronological?
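
    This pattern is an ordinary brute-force scan guessing the root password. A hedged first line of defense, using standard sshd_config directives (the username below is a placeholder to replace with your own), is to forbid root and password logins entirely:

        # /etc/ssh/sshd_config - sketch of a minimal hardening pass
        PermitRootLogin no           # attackers overwhelmingly guess root first
        PasswordAuthentication no    # require SSH keys instead of passwords
        AllowUsers yourlogin         # placeholder; list only real admin accounts

    After editing, reload sshd (for example, service sshd reload on distributions of that era); tools like fail2ban, or moving sshd to a nonstandard port, cut the log noise further. As for the ordering: the interleaved 08:38/15:38 entries with matching seconds look like two processes logging with different timezone settings rather than a genuinely out-of-order log.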

  • Error codes 80070490 and 8024200D in Windows Update

    - by Sammy
    How do I get past these stupid errors? The way I have set things up, Windows Update tells me when there are new updates available, and I review them before installing them. Yesterday it told me there were 11 new updates. I reviewed them and saw that about half of them were security updates for Vista x64 and .NET Framework 2.0 SP2, and half were regular updates for Vista x64. I checked them all and hit the Install button. It seemed to work at first: updates were being downloaded and installed. But at update 11 of 11 it got stuck and gave me the two error codes you see in the title. (I took screenshots of the update list as presented to me, of the failed installation, and of the updates it is trying to install, though they may not show much detail.)

    Update: This is on Windows Vista Ultimate 64-bit with integrated SP2, installed only two weeks ago on 2012-10-02. Aside from this, the install is working flawlessly. I have not made any major changes to the system, like installing new devices or drivers.

    What I have tried so far: I installed the System Update Readiness Tool (the correct one for Vista x64) from Microsoft. This did not solve the issue.

    Microsoft resource links:

    - Solutions to 80070490: "Windows Update error 80070490" and "System Update Readiness Tool fixes Windows Update errors in Windows 7, Windows Vista, Windows Server 2008 R2, and Windows Server 2008"
    - Solutions to 8024200D: "Windows Update error 8024200d"

    Essentially, both solutions tell you to install the System Update Readiness Tool for your system. As I have done so and it didn't solve the problem, the next step would be to try to repair Windows. Before I do that, is there anything else I can try?

    Microsoft automatic troubleshooter: If I click the automatic troubleshooter link available on the solution web page above, it directs me to download a file called windowsupdate.diagcab. But after download, this file is not associated with any Windows program. Is this the so-called Microsoft Fix It program? It doesn't have its icon; it's just a blank file. Does it need to be associated? And with what Windows program?

  • Automating the choice between JPEG and PNG with a script

    - by MHC
    Choosing the right format to save your images in is crucial for preserving image quality and reducing artifacts. Different formats use different compression methods and come with their own sets of advantages and disadvantages. JPEG, for instance, is suited to real-life photographs that are rich in color gradients. The lossless PNG, on the other hand, is far superior when it comes to schematic figures. Picking the right format can be a chore when working with a large number of files, which is why I would love to find a way to automate it.

    A little background on my particular use case: I am working on a number of handouts for a series of lectures at my university. The handouts are rich in figures, which I have to extract from PDF-formatted slides. Extracting these images gives me lossless PNGs, which are needlessly large at times. Converting these particular files to JPEG can reduce their size to less than 20% of the original, while maintaining the same quality. This is important, as working with hundreds of large images in word processors is pretty crash-prone. Batch-converting all extracted PNGs to JPEGs is not an option I am willing to follow, as many if not most images are better suited to PNG; converting those would result in insignificant size reductions and sometimes even increases in file size, at least in my test runs.

    What we can take from this is that file size after compression can serve as an indicator of which format suits a particular image best. It's not a particularly accurate predictor, but it works well enough. So why not use it in the form of a script? I included inotifywait because I would prefer the script to execute automatically as soon as I drag an extracted image into a folder. This is a simpler version of the script that I've been using for the last couple of weeks:

        #!/bin/bash
        inotifywait -m --format "%w%f" --exclude '.jpg' -r -e create -e moved_to --fromfile '/home/MHC/.scripts/Workflow/Conversion/include_inotifywait' |
        while read file; do
            mogrify -format jpg -quality 92 "$file"
        done

    The advanced version of the script would have to be able to:

    - handle spaces in file names and directory names
    - preserve the original file names
    - flatten PNG images if an alpha value is set
    - compare the file size between the temporarily converted image and its original
    - determine whether the difference is greater than a given percentage
    - act accordingly

    The actual conversion could be done with ImageMagick tools (see the sketch after this entry):

        convert -quality 92 -flatten -background white file.png file.jpg

    Unfortunately, my bash skills aren't anywhere near advanced enough to turn the scheme above into an actual script, but I am sure many of you can. My reputation points on here are pretty low, but I will gladly award the most helpful answer with the highest bounty I can set.

    References: http://www.formortals.com/introducing-cnb-imageguide/, http://www.turnkeylinux.org/blog/png-vs-jpg

    Edit: Also see my comments below for some more information on why I think this script would be the best solution to the problem I am facing.
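
    Here is one hedged sketch of that comparison logic. It assumes GNU coreutils (stat -c, mktemp --suffix), ImageMagick's convert, and inotify-tools; the watch directory is taken as an argument, the 20% threshold is a placeholder, and for brevity it drops the --exclude/--fromfile options of the original:

        #!/bin/bash
        # Sketch: watch a directory (passed as $1), convert each new PNG to a
        # temporary JPEG, and keep whichever format is smaller by a margin.
        THRESHOLD=20   # required size reduction, in percent

        inotifywait -m --format "%w%f" -r -e create -e moved_to "$1" |
        while IFS= read -r png; do
            case "$png" in *.png) ;; *) continue ;; esac   # only PNG files

            jpg=$(mktemp --suffix=.jpg)
            if ! convert -quality 92 -flatten -background white "$png" "$jpg"; then
                rm -f "$jpg"; continue
            fi

            png_size=$(stat -c %s "$png")
            jpg_size=$(stat -c %s "$jpg")

            # Keep the JPEG only if it is more than THRESHOLD percent smaller.
            if [ $(( (png_size - jpg_size) * 100 / png_size )) -gt "$THRESHOLD" ]; then
                mv -- "$jpg" "${png%.png}.jpg" && rm -- "$png"
            else
                rm -f -- "$jpg"
            fi
        done

    Quoting every expansion ("$png", "$jpg") is what makes spaces in file and directory names safe, -flatten with a white background handles PNGs with an alpha channel, and the original base name is preserved with only the extension changing.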

  • Determining the health of a Cisco switch port?

    - by ewwhite
    I've been chasing a packet-loss and network-stability issue for a handful of end users on an internal network for the past few days. These issues surfaced recently; however, the location was struck by lightning six weeks ago. I was seeing 5-10% packet loss between a stack of four Cisco 2960s and several PCs and phones on the other side of a 77-meter run. The PCs were run inline with the phones over a trunked link. We were seeing dropped calls and interruptions in client-server applications and Microsoft Exchange connectivity.

    I tried the usual troubleshooting steps remotely, having a local technician do the following during breaks in user and production activity:

    - change cables between the wall jack and the device
    - change patch cables between the patch panel and switch port(s)
    - try different switch ports within the 2960 stack
    - swap end-user devices with known-good equipment (new phones, different PCs)
    - clear switch port interface counters and monitor incrementing errors closely (Pastebin output of sh int)
    - pore over the device logs and Observium RRD graphs (no link up/down issues from the switch side)
    - change power strips on the end-user side
    - test cable runs from the Cisco 2960 using test cable-diagnostics tdr int Gi4/0/9 (clean)
    - test cable runs with a Tripp-Lite cable tester (clean)
    - run diagnostics on the switch stack members (clean)

    In the end, it took three changes of switch ports to find a stable solution. The only logical conclusion is that a few Cisco 2960 switch ports are bad or flaky: not dead, but not consistent in behavior either. I'm not used to seeing individual ports die in this manner. What else can I test or check to determine whether these devices are bad? Is it common for single ports to have problems, rather than a contiguous bank of ports?

    BTW, show cable-diagnostics tdr int Gi4/0/14 is very cool:

        Interface Speed Local pair Pair length        Remote pair Pair status
        --------- ----- ---------- ------------------ ----------- -----------
        Gi4/0/14  1000M Pair A     79 +/- 0 meters    Pair B      Normal
                        Pair B     75 +/- 0 meters    Pair A      Normal
                        Pair C     77 +/- 0 meters    Pair D      Normal
                        Pair D     79 +/- 0 meters    Pair C      Normal
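
    For ongoing monitoring of a suspect port, a short checklist of standard IOS show commands (sketched here with the interface names from above; adjust to your stack) helps separate cabling faults from flaky port hardware:

        ! Watch the error counters for one port in isolation
        show interfaces Gi4/0/9 counters errors

        ! Zero the counters so any new errors stand out immediately
        clear counters Gi4/0/9

        ! Re-run TDR after a cable change, then read the result
        test cable-diagnostics tdr interface Gi4/0/9
        show cable-diagnostics tdr interface Gi4/0/9

    If errors keep accruing on a port with a known-good cable, device and patch path while a neighboring port stays clean under the same load, that is about as close to a per-port hardware verdict as the CLI gives; lightning damage in particular tends to kill individual PHYs rather than whole banks.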

  • How to diagnose repeated "Starting up database '<dbname>'"

    - by Richard Slater
    I have a SQL 2008 server which is predominantly used as a development server. In the last two weeks it has been having occasional "fits". I have isolated the cause as CHECKDB apparently being run almost continuously, with the following logged to the Windows Event Log (Source: MSSQLSERVER, Category: Server):

        Event: 1073758961, Message: Starting up database 'DBName1'.
        Event: 1073758961, Message: Starting up database 'DBName2'.
        Event: 1073759397, Message: CHECKDB for database 'DBName1' finished without errors on 2010-07-19 20:29:26.993 (local time). This is an informational message only; no user action is required.
        Event: 1073759397, Message: CHECKDB for database 'DBName1' finished without errors on 2010-07-19 20:29:26.993 (local time). This is an informational message only; no user action is required.

    This repeats every 1-2 seconds until SQL Server is restarted or the offending databases are detached. I initially thought it was a problem with the databases themselves, so I took a backup and restored them to a SQL Express instance: all of the data is intact, and CHECKDB runs without problems. The two databases causing the problem last week were not in use, so I took full backups of them and detached them, which resolved the problem. However, at 01:00 GMT this morning, two other, totally unrelated databases started showing the same behavior.

    There is nothing in the event log to suggest that something happened to the server, such as a restart; there are no messages about processes crashing or issues detected with the storage controller. Speaking to the owner of the company, this computer has suffered from "gremlins" in the past, but advice was taken and the motherboard was replaced and the computer rebuilt; memory and processor are the same.

    Stats:

        O/S: Windows 2008 Standard Build 6002
        CPU: 2x Pentium Dual-Core E5200 @ 2.5GHz
        RAM: 2GB
        SQL: 2008 Standard 10.0.2531

    Edit: Someone posted, then deleted, a comment about AutoClose; it was turned on for the affected databases. Best practice seems to be to disable it, so I have done that with the following:

        EXECUTE sp_MSforeachdb 'IF (''?'' NOT IN (''master'', ''tempdb'', ''msdb'', ''model''))
            EXECUTE (''ALTER DATABASE [?] SET AUTO_CLOSE OFF WITH NO_WAIT'')'

    I won't know for some time whether the problem recurs, so I am still open to further answers.
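
    Two details fit the AutoClose explanation: with AUTO_CLOSE on, the database shuts down whenever its last connection closes and logs "Starting up database" on the next touch; and the "CHECKDB ... finished without errors" line does not mean a check just ran, since at every startup SQL Server replays the date of the last successful CHECKDB recorded in the database boot page (note the old 2010-07-19 date repeated on each entry). A quick catalog query, sketched here, lists any databases that still have the option enabled:

        -- User databases that still have AUTO_CLOSE enabled
        SELECT name
        FROM sys.databases
        WHERE is_auto_close_on = 1
          AND name NOT IN ('master', 'tempdb', 'msdb', 'model');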

  • D-Link DIR-300 slows down / loses network

    - by basic6
    Hi there, there are 2 buildings (A and B). In bldg A is an open WLAN (which I'm allowed to use btw). In bldg B is a computer that I want to connect to that network. So I flashed an old D-Link DIR-300 AP with DD-WRT, mounted it to the wall (bldg B) near a window, attached a 13 dBi directional antenna (pointing to bldg A) and configured it as AP client in that wireless network. Then there's another AP, connected to the D-Link AP, acting as standard access point, which the computer is connected to. That's basically working so far, but:

    Every now and then the connection is lost. Not the connection between the computer and the D-Link (I can access the DD-WRT admin page normally) or the connection between the D-Link and the WLAN (in Status - Wireless it says it's still connected to the network), but when I want to access a web page (which only works if I'm connected to the wireless network from bldg A), my Firefox keeps "Looking for" (name resolution) without finding anything. When I reset the D-Link (power off, power on) in this situation, after some moments, everything's working fine again (Internet access). I've no idea why this is happening, but usually it's at most every few weeks (most times when nobody was using the computer, so no traffic).

    Compared to the connection speed when I connect directly to the WLAN in bldg A (laptop), the speed in bldg B is rather slow, but I have the impression that this difference is worse in the last few days. A few minutes ago, I got 582 KB/s down and 911 KB/s up in bldg A (directly/laptop) and 84 KB/s down and 9 KB/s up in bldg B. The speed in bldg B used to be way higher (I remember 200 KB/s up) while the actual network speed in bldg A was lower than it is now (close to those 200). I'm aware that the wireless connection between those buildings should slow things down, but I'm wondering why this difference has become that extreme. Thanks for any tips...

    Update: I currently want to upload a large file (1.5 GB) via FTP (FileZilla). Since that caused the D-Link to disconnect (as described in the first post), I took my laptop to bldg A, connected directly to the original WLAN (bypassing my D-Link) and tried the same upload. Guess what - same issue: At some point the connection is dead (at this point I would have reset my D-Link if I was connected to it). Just as the D-Link, my laptop is still connected, but not even name resolution is working ("Looking for..." in Firefox). After reconnecting, it's working again. Maybe my D-Link isn't the problem at all...

  • Windows 7 remains powered on when restarting

    - by BombDefused
    I'm running Windows 7 x64 on an MSI P67A-GD53 motherboard, in an Antec P280 Super Midi tower case with a Corsair 650W PSU. I've just installed a second instance of Windows 7 x64 on a separate disk (this is to keep my games separate from my work OS).

    The problem is that I now cannot restart from either instance of Windows 7. The shutdown and sleep commands work as expected. When I try to restart, the shutdown happens, but the system never reboots: everything remains powered on until I hold down the power button to force power off. I think (but am not 100% sure) this only started after I installed the second OS, and I'm assuming it has something to do with the motherboard needing to know which OS to boot into again. Some other forums I've read suggest that the PSU plays a major role in restarting and could be at fault. Changing the boot order of the disks in the BIOS does not change anything. Any suggestions gratefully received!

    Update: I now have a reproducible issue. I think the secondary OS install may have been a red herring; it was when Windows tried to reboot during the install that I noticed the problem. After playing around with installing drivers and rebooting many, many times, I have found that it is the OC Genie setting on the MSI motherboard that seems to trigger the problem. This makes sense, as I only started using the OC Genie feature a couple of weeks ago and probably hadn't used restart in that time.

    However, simply turning off OC Genie does not make the issue go away. I have to turn off OC Genie, shut down, start and enter the BIOS, go to the "Save and Exit" menu, choose "Restore Defaults", and answer yes to "Load optimized defaults"; that resets things and clears the problem. When the PC then boots into Windows, I can restart as normal (from the OS on either HDD).

    So I only know how to contain the issue; I still don't know the root cause. I'd like to be able to use the OC Genie function, if anyone can suggest why I'm seeing this problem. Could it be that I'm drawing too much power when using the OC feature?

  • Downloading Python 2.5.4 (from official website) in order to install it

    - by brilliant
    I was quite hesitant about whether I should post this question on Stack Overflow or on Super User, but finally decided to post it here, as Python is more a programming language than a piece of software.

    I've recently been using Python 2.5.4, which is installed on my computer, but at the moment I am not at home (and won't be for about two weeks), so I need to install the same version of Python on another computer. That computer has Windows XP installed, just like the one I have at home. The reason I need Python 2.5.4 is that I am using Google App Engine, and I was told that it only supports Python 2.5.

    However, when I went to the official Python page for the download, I discovered that certain things have changed, and I don't quite remember where exactly on that site I downloaded Python 2.5.4 for my computer at home. I found this page: http://www.python.org/download/releases/2.5.4/ (a screenshot is at http://brad.cwahi.net/some_pictures/python_page.jpg). A few things here are not clear to me. It says:

        For x86 processors: python-2.5.4.msi
        For Win64-Itanium users: python-2.5.4.ia64.msi
        For Win64-AMD64 users: python-2.5.4.amd64.msi

    First of all, I don't know what processor I am using, whether mine is x86 or not; I also don't know whether I am a Win64-Itanium or a Win64-AMD64 user. Are Itanium and AMD64 also processors? Later it says:

        Windows XP and later already have MSI; many older machines will already have MSI installed.

    I guess that is my case, but then I am totally puzzled as to which link I should click, as it now seems I don't need those three previous links (since MSI is already installed on Windows XP), yet there is no fourth link provided for those who use Windows XP or older machines. Of course, there are these words after that:

        Windows users may also be interested in Mark Hammond's win32all package, available from Sourceforge.

    But that seems to me to be something additional rather than the main file. So my question is simple: where on the official Python website can I download Python 2.5.4; precisely, which link should I click?
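
    As a hedged aside on the processor question: Itanium and AMD64 are the two 64-bit processor families those installers target, "MSI" in the quoted note refers to the Microsoft Installer runtime that runs .msi files (not a separate download), and almost any consumer Windows XP machine is a plain 32-bit x86 system, which makes python-2.5.4.msi the usual choice. One quick check on XP Professional, where the systeminfo command ships:

        REM Prints "System Type: X86-based PC" on a 32-bit Windows install
        systeminfo | find "System Type"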

  • Signup with email authentication, only 30% are activated?

    - by mysqllearner
    I have a website which lets users sign up. The signup process includes sending an activation email; clicking the link in it activates the account. The first two weeks were fine: out of around 2,000 users, 1,800 were activated. After that, the activation rate dropped drastically to about 30%. For example, of 1,000 users signing up, only 300 were activated.

    At first, I found the problem was that the email could not reach Ymail, MSN and Gmail users (most of my subscribers use Yahoo (ymail), Hotmail/MSN (live) and Gmail addresses). I tried signing up with Ymail and Hotmail accounts, and I didn't get any activation email. I contacted Yahoo and MSN, and eventually my email could get through. However, my signup statistics still show that only about 30% of users activate, which confuses me greatly. I also contacted my hosting company and asked them to whitelist my IP, and they did.

    I need your advice/help on the following questions:

    - How can I check where the problem lies? Is the email not delivered, or do users receive the email but not click the activation link?
    - Is there anything wrong with the headers below?
    - What can I do to improve my registration/signup activation process?

    I am using the PHP mail() function, and these are my headers:

        $headers = 'MIME-Version: 1.0' . "\r\n";
        $headers .= 'Content-type: text/html; charset=UTF-8' . "\r\n";
        $headers .= 'From: Admin <[email protected]>' . "\r\n";
        $headers .= 'Return-Receipt-To: Bounce <[email protected]>' . "\r\n";
        $headers .= 'Reply-To: Admin <[email protected]>' . "\r\n";
        $return_path = "[email protected]";
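
    One hedged improvement worth testing, assuming the host delivers PHP mail through a sendmail-style binary: the Return-Path is set by the envelope sender, not by a header or a stray $return_path variable, so pass it through mail()'s fifth argument. The addresses below are placeholders (example.com) standing in for the redacted ones:

        <?php
        // Sketch: set the envelope sender so bounces and sender checks resolve correctly.
        $to      = 'user@example.com';
        $subject = 'Activate your account';
        $message = '<p>Click the activation link ...</p>';
        $headers = "MIME-Version: 1.0\r\n"
                 . "Content-type: text/html; charset=UTF-8\r\n"
                 . "From: Admin <admin@example.com>\r\n"
                 . "Reply-To: Admin <admin@example.com>\r\n";

        // The 5th parameter is handed to sendmail; -f sets the envelope sender.
        mail($to, $subject, $message, $headers, '-fbounce@example.com');

    Beyond that, an SPF record naming the web server as a permitted sender, and a From: domain that matches the sending server, are the two levers that most affect Yahoo/Hotmail/Gmail deliverability; logging clicks on the activation URL separately from sends would answer the "delivered but not clicked" question.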

  • Critique My Backup and Storage Plan

    - by MetaHyperBolic
    My current storage (RAID-1 off a hardware RAID card) and backup (a spare drive) solutions for my home network are inadequate. I have too much data scattered across various one-off drives. It is time to evolve.

    Backups seem simple enough, at least: lots of big drives. However, I am bewildered by the number of choices for small home storage. The Drobo S looks appealing. So does the ReadyNAS. I am not looking for bunches of shiny features; I'm mostly interested in reliability. I am not interested in building Yet Another PC to create a file server, or doing something in the cloud, or whatever. I'm stupid, so I am keeping it simple.

    Requirements for the main volume:

    - Starting working space of roughly 2TB, with options for growth up to 5TB
    - RAID or something RAID-like with at least one parity drive
    - eSATA II for speed during backups
    - Ability to shut down gracefully when alerted of low power by a UPS
    - Optional but desirable: takes 2TB drives now, with options for the larger 3TB drives coming in 2010-2011
    - Optional but desirable: RAID-6 or something similar, with two parity drives
    - Optional but desirable: hot spare
    - Ethernet connection not required, as the volume will be shared by the same machine which runs my home print server

    Backups:

    - Performed via ROBOCOPY in mirror mode to an external hard drive over an eSATA II connection (see the sketch after this entry)
    - Start by rotating between two external 2TB hard drives, growing to six external 2TB drives
    - Start with a weekly backup; move to a bi-weekly backup as more drives are added
    - Move to 3TB drives as the size of my main volume increases
    - Backup drives will be stored at an off-site location

    Hard drives: I plan on buying all of the same model, but different batches from different vendors. I found a "burn-in" utility with which I can pound away on the drives for a couple of weeks before adding them to the backup pool or the main volume. I estimate that I am looking at roughly $1,500 to start, once I start throwing in two 2TB drives for backup and four for storage.

    So, are there any obvious flaws in my plan? What have I overlooked? Any suggestions for a storage device for my main volume that fits my requirements? Or do I just keep it simple (two drives in RAID-1), then perform due diligence with my backups, accepting that I will have to buy a whole new unit when my data grows past 2TB?
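
    For the mirroring step itself, here is a hedged sketch of the ROBOCOPY invocation (paths and log location are placeholders). /MIR makes the destination an exact mirror, which also means it deletes from the backup anything deleted from the source, so the off-site rotation is what protects against accidental deletions:

        REM Mirror the main volume to the currently attached backup drive.
        REM /R and /W keep retries short so one bad file can't stall the run.
        robocopy D:\MainVolume E:\Backup /MIR /R:2 /W:5 /LOG:C:\Logs\backup.log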

  • Puppet - Possible to use software design patterns in modules?

    - by Mike Purcell
    As I work with Puppet, I find myself wanting to automate more complex setups, for example vhosts for X number of websites. As my Puppet manifests get more complex, I find it difficult to apply the DRY (don't repeat yourself) principle. Below is a simplified snippet of what I am after, but it doesn't work: Puppet throws various errors depending on whether I use classes or defines. I'd like to get some feedback from some seasoned puppetmasters on how they might approach this solution.

        # site.pp
        import 'nodes'

        # nodes.pp
        node nodes_dev { $service_env = 'dev' }
        node nodes_prod { $service_env = 'prod' }
        import 'nodes/dev'
        import 'nodes/prod'

        # nodes/dev.pp
        node 'service1.ownij.lan' inherits nodes_dev {
            httpd::vhost::package::site { 'foo': }
            httpd::vhost::package::site { 'bar': }
        }

        # modules/vhost/package.pp
        class httpd::vhost::package {
            class manage($port) {
                # More complex stuff goes here, like ensuring that conf paths and URIs
                # exist, as well as log files; work I want to do once and use many times
                notify { $service_env: }
                notify { $port: }
            }
            define site {
                case $name {
                    'foo': {
                        class { 'httpd::vhost::package::manage': port => 20000 }
                    }
                    'bar': {
                        class { 'httpd::vhost::package::manage': port => 20001 }
                    }
                }
            }
        }

    That snippet gives me a Duplicate declaration: Class[Httpd::Vhost::Package::Manage] error, and if I switch the manage class to a define and attempt to access a global, or pass in a variable common to both foo and bar, I get a Duplicate declaration: Notify[dev] error. Any suggestions on how I can implement the DRY principle and still get Puppet to work?

    -- UPDATE --

    I'm still having a problem trying to ensure that some of my vhosts, which may share a parent directory, are set up correctly. Something like this:

        node 'service1.ownij.lan' inherits nodes_dev {
            httpd::vhost::package::site { 'foo_sitea': }
            httpd::vhost::package::site { 'foo_siteb': }
            httpd::vhost::package::site { 'bar': }
        }

    What I need is for sitea and siteb to share the same parent "foo" folder. The problem I am having is when I call a define to ensure the "foo" folder exists. Below is the site define as I have it; hopefully it makes sense what I am trying to accomplish:

        class httpd::vhost::package {
            File { owner => root, group => root, mode => 0660 }

            define site() {
                $app_parts = split($name, '[_]')
                $app_primary = $app_parts[0]
                if ($app_parts[1] == '') {
                    $tpl_path_partial_app = "${app_primary}"
                    $app_sub = ''
                } else {
                    $tpl_path_partial_app = "${app_primary}/${app_parts[1]}"
                    $app_sub = $app_parts[1]
                }
                include httpd::vhost::log::base
                httpd::vhost::log::app { $name:
                    app_primary => $app_primary,
                    app_sub     => $app_sub,
                }
            }
        }

        class httpd::vhost::log {
            class base {
                $paths = [
                    '/tmp', '/tmp/var', '/tmp/var/log', '/tmp/var/log/httpd',
                    "/tmp/var/log/httpd/${service_env}",
                ]
                file { $paths: ensure => directory }
            }
            define app($app_primary, $app_sub) {
                $paths = [
                    "/tmp/var/log/httpd/${service_env}/${app_primary}",
                    "/tmp/var/log/httpd/${service_env}/${app_primary}/${app_sub}",
                ]
                file { $paths: ensure => directory }
            }
        }

    The include httpd::vhost::log::base works fine, because an include is only evaluated once, even though site is called multiple times. The error I am getting is Duplicate declaration: File[/tmp/var/log/httpd/dev/foo], because both foo_ sites declare the same parent folder. I looked into using exec, but I'm not sure that is the correct route. Surely others have had to deal with this before; any insight is appreciated, as I have been grappling with this for a few weeks. Thanks.
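
    One hedged sketch of the usual way out of both errors: make site a parameterised define (defines, unlike classes, can be instantiated many times), and declare each shared parent directory exactly once via ensure_resource, which comes from the puppetlabs-stdlib module and is therefore an added dependency, so that foo_sitea and foo_siteb do not both declare File['/tmp/var/log/httpd/dev/foo']:

        define httpd::vhost::package::site($port) {
            $app_parts   = split($name, '[_]')
            $app_primary = $app_parts[0]

            # The shared parent directory is declared idempotently: the first
            # site to reach it creates the resource, later sites reuse it.
            ensure_resource('file',
                "/tmp/var/log/httpd/${::service_env}/${app_primary}",
                { 'ensure' => 'directory' })

            # Per-site work uses $port directly, so no case statement is needed.
            notify { "${name} listens on ${port}": }
        }

        # Usage: the ports travel with the declaration.
        httpd::vhost::package::site { 'foo_sitea': port => 20000 }
        httpd::vhost::package::site { 'foo_siteb': port => 20001 }
        httpd::vhost::package::site { 'bar':       port => 20002 }

    Without stdlib, the older if ! defined(File[$path]) { file { $path: ensure => directory } } guard achieves the same effect, with the well-known caveat that it is evaluation-order dependent.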

  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well.

    - by Thomas
    Ok, here's the scenario: Up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me, too, but only in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either), does not time out under any browser. To clarify: https://www.mysite.com/changes.php times out on POST, but the same page over plain http works fine. That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the request and response headers from the page, and they all follow the correct formats.

    Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based; they never time out on static images or HTML files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus would hit every one of the machines to which I have access in exactly the same manner.

    My setup is: Server: Win2k3, IIS6, PHP 5.2.9-1. Clients: IE7, IE8, Chrome (regular and dev channel): frequent timeouts on PHP pages. Firefox 2, Firefox 3: no timeouts, and Firebug shows no errors or even lengthy periods serving the pages.

    I've spent 2 days searching for any tech knowledge I can find, and my search parameters are all too general: everyone has problems loading SSL pages in IE and Chrome, for a wide variety of reasons. The infrequent nature of the timeouts and the fact that no errors are being reported anywhere are starting to drive me insane. Does anyone have any insight on a problem like this?

  • Splunk is fantastically expensive: What are the alternatives? [closed]

    - by samsmith
    Possible Duplicate: Alternatives to Splunk?

    This has been discussed, but it has been several months, so it may be time to revisit it (see the earlier discussion re: Splunk alternatives). For the record, Splunk rocks. But the pricing is simply beyond what we can consider: when I spoke with Splunk today, the cost for a system to index 5GB/day of data is over $30,000. That is more than we spend on SQL Server (by a large multiple), more than we spend on a rack of servers (by a multiple), etc. The Splunk sales team is correct that for $30K we get more value and functionality than if we spent the same building our own system, but it doesn't matter: the Splunk cost is simply too high (by a multiple). Soooooo, we are looking around! Is anyone out there building a Splunk-like system? Our basic needs:

    - Able to listen for syslog messages on multiple UDP ports
    - Able to index the incoming data asynchronously
    - Some kind of search engine
    - Some kind of UI
    - An API to the search engine (to embed in our console)

    We currently need to index 3-5GB/day, but need to be able to scale to 10GB/day or more. We do not need a lot of history (30 days is fine). We use Windows 2008 and 2003 servers. Thanks for your thoughts!

    UPDATE: We spent two weeks researching commercial and open source options. Our conclusion: write our own (we are a software company; we know how to write things). We built a great system on MongoDB and .NET that gave us the functions we needed from MongoDB in about one engineering week, and we have now completed our implementation. We use two MongoDB servers (master and slave) and are able to log and index any amount of log data (5GB/day, 15GB/day, etc.), limited only by disk space.

    OBSERVATIONS: This space needs a solid solution that is $1,000-3,000 flat rate. The licensing models used by the commercial firms are based on a "milk the data center ops guys" model. That is their right (of course!), but it leaves a HUGE space open for someone to come in underneath them. My guess is that in another year or two there will be a good open source solution that is really usable. Thank you all for your input (even if it was self-promotion).

  • System Requirements of a write-heavy applications serving hundreds of requests per second

    - by Rolando Cruz
    NOTE: I am a self-taught PHP developer who has little to no experience managing web and database servers.

    I am about to write a web-based attendance system for a very large user base. I expect around 1,000 to 1,500 users logged in at the same time, making at least 1 request every 10 seconds or so, for a span of 30 minutes a day, 3 times a week. So it's more or less 100 requests per second, or at the very worst 1,000 requests in a second (an average of 16 concurrent requests? It could be higher, given the short timeframe in which users will make these requests; fingers crossed to avoid 100 concurrent requests).

    I expect two types of transactions, a "local" (not referring to a local network) and a "foreign" transaction:

    - Local transactions basically download user data for the user's own locality and cache it for 1-2 weeks. Attendance requests will probably be two numeric strings only: userid and eventid.
    - Foreign transactions cover attendance of those who do not belong to the current locality. These pass the following data instead: (numeric) locality_id and (string) full_name.

    Both types of request are made via Ajax, so no HTML data is included, only JSON, and both expect at the very least a single numeric response from the server. I think there will be a 50-50 split in the frequency of local and foreign transactions, but there are only a few bytes of difference between their sizes anyway. As of this moment, userid may reach only 6 digits, and eventid is a 4- to 5-digit integer.

    I expect my users table to have at least 400k rows, the event table to have as many as 10k rows, a locality table with at least 1,500 rows, and my main attendance table to grow by 400k rows (based on the number of users in the users table) a day, 3 days a week (1.2M rows a week). To me, this sounds big. But is it really that big, or can it be handled by a single server? (I'm not sure about the server specs yet, since I'll probably get a VPS from ServInt or another provider.) I tried to read up on multiple-server setups: Heartbeat, DRBD, master-slave configurations. But I wonder whether they're really necessary. The users table will add around 500-1k rows a week.

    If this can't be handled by a single server, and I am to choose a MySQL replication topology, what would be the best setup for this case?

    Sorry if I sound vague or the question is too broad. I just don't know what to ask, or what you need to know, at this point.

  • Malicious program changing my DNSs

    - by julio.alegria
    Some weeks ago I started having problems with my internet connection: it was extremely slow, and suddenly some websites (specifically Gmail, Facebook, YouTube and Twitter) started failing to connect, while the rest connected normally. Some days later, those same websites started showing me a message in Portuguese, "Nova atualização disponível" ("New update available"), whenever I tried to connect, and a .exe file started downloading ("internet_update.exe" or something like that). That's when I freaked out! It was definitely a virus or something like it, but it was really weird, because I had never had a problem like that (I run Linux). So I turned on my old PC (running Windows XP), and it turned out it had the same problem: the same message was shown whenever I tried to connect to one of those specific websites, while the rest loaded without problems. Even on my Android smartphone the same message appeared. So it was obvious that the problem was not in a particular machine, but in the router itself.

    So I started googling, and I found some information. Unfortunately I only found material in Spanish, so I will give you a short summary: it is a new banking trojan developed specifically to infect and collect information from Brazilian banks; apparently it has now expanded to Argentina and Peru. How does it work? It spreads through social networks (videos, links, ...) and then "takes control" of your internet connection by changing your DNS servers. More specifically, it changes the primary DNS to one of these IPs: 108.170.13.38, 66.7.216.122 or 63.143.43.154, and the secondary DNS to 8.8.8.8. The secondary DNS is actually the Google Public DNS, and it is configured this way so that your internet connection continues working properly and the user does not notice anything. The important part here is that because nothing has been downloaded or installed on your machine, no antivirus will notice any change. After your DNS servers have been changed, the trojan controls every single website you connect to, and this way they steal your bank information.

    After reading about this, I logged in to my router and restored my primary and secondary DNS servers to their proper values, but one day later I had the same problem again. This is actually half warning post, half help-me post. So here comes the question: is there any possible way to prevent my DNS settings from being changed?

  • Our VPS is being used as a Warez mule

    - by Mikuso
    The company I work for runs a series of ecommerce stores on a VPS: a WAMP stack with 50GB of storage. We use an archaic piece of ecommerce software which operates almost entirely client-side. When an order is taken, it is written to disk, and a scheduled task downloads the orders once every 10 minutes.

    A few days ago, we ran out of disk space, which caused orders to fail to be written. I quickly hopped on to delete some old logs from the mail server and freed up a couple of GB pretty quickly, but I wondered how we could fill 50GB with nothing much more than logs. Turns out, we didn't. Hidden deep within the C:\System Volume Information directory, we have a stack of pirated videos, which seem (looking at the timestamps) to have appeared over the past three weeks: porn, American sports, Australian cooking shows. A very odd collection; it doesn't look like an individual's personal tastes, more like the VPS is being used as a mule.

    We have a "5 attempts and you're blocked" policy on our FTP server (plus, there is no FTP account with access to that directory), and the Windows user account has had its password changed recently. The main avenues are sealed, and logs can verify that. I thought I'd watch and see if it happened again, and yes, another cooking show appeared this morning.

    I am the only one at my company who knows of this problem, and only one of two people with access to the VPS (the other being my boss, and no, it's not him). So how is this happening? Is there a vulnerability in some of the software on the VPS? Are the VPS owners peddling warez across our rented space? (Can they do that?) I don't want to delete the warez in case it is seen as a hostile action against this outside force and they choose to retaliate. What should I do? How do I troubleshoot this? Has this happened to anyone else before?

  • Problems when trying to connect to a router wirelessly

    - by Ruud Lenders
    The situation: At my girlfriend's parents' place there are six Windows 7 devices connected to a router, wired or wireless: 3 desktops and 3 laptops. Several smartphones also use the router. The router is secured with WPA2 (AES).

    The problem: We never had any problems with the router for over a year, but recently, about 3 weeks ago, my girlfriend's laptop (HP) and my laptop (ASUS) started having problems connecting to it. The router has stopped showing up in the network list. Sometimes it comes back and shows up, but then it keeps saying something along the lines of "Could not connect", and not long after that it disappears again. The router's range is not the problem here, because we experience the same thing sitting next to it. Sometimes, if we are lucky and have waited a long time (10-15 minutes) without using the laptop for anything, the laptop will eventually connect successfully.

    The attempts: Of course, the Windows 7 troubleshooter. We tried troubleshooting the connection problems and the wireless network adapter, but no luck. We also reset the router enough times to know that's not helping either. Here's the full list of things we tried that did not help:

    - Running the Windows 7 troubleshooter
    - Resetting the router (more than once)
    - Setting the router settings to factory defaults
    - Disconnecting all other devices except one laptop
    - Applying a system restore
    - Trying static/dynamic IP/DNS (dynamic is better, right?)
    - Enabling/disabling IPv6 (should I keep IPv6 disabled?)
    - Running the command: netsh wlan stop hostednetwork
    - Running the command: netsh wlan set hostednetwork mode=disallow
    - Updating/reinstalling the wireless adapter drivers

    The tests: To help find the core of the problem, we tested the following:

    - Plugging an ethernet cable into the router and into our laptops: worked fine
    - Connecting someone else's laptop to the router (wireless): worked fine
    - Connecting our laptops to someone else's router: worked fine

    The router: This information might be relevant. Router model: Sitecom 300N Wireless Router, hardware version 01. The DHCP server's IPs range from 192.168.0.100 to 192.168.0.200. Router settings:

    - Wireless channel: 12
    - Channel bandwidth: 20/40 MHz
    - Extension channel: 8
    - Preamble type: Long
    - 802.11g protection: Disabled
    - UPnP: Enabled

    The laptops: If you are wondering about our laptops: mine is an ASUS Pro64JQ and my girlfriend's is an HP Pavilion G6, both running Windows 7 Professional x64 with Service Pack 1. My wireless adapter is an Atheros AR9285, with AdHoc 11n enabled.

    The question: Has anyone experienced the same problems as I have? Does someone know how to solve this? Are there more tricks I can try, or settings I should change?

    Note: Our laptops are not slow or old. Mine is 1.5 years old, and the other laptop is just 5 months old. I know how to keep laptops clean, and I'm pretty sure both laptops are not bloated with useless software.

  • Changing Physical Path gives blank homepage

    - by Julie
    I have two ASP Classic websites: www.company.com and www.companytesting.com. At this time of year, company.com points to a folder called website2012 and companytesting.com points to a folder called website2013. The contents of those two folders are almost identical, with just minor changes for our season change (which I was supposed to do today, lol).

    Up until a couple of weeks ago, I was running Windows Server 2003. To update the "live" website, I'd make a copy of the test site's folder, rename the copy website2013R1, point the test site there, then point the live site at website2013. We now have Windows Server 2008 R2 x64 (I had someone migrate the websites to the new server for me).

    The companytesting.com site, when I pointed it at website2013R1, worked fine. The company.com site, when I pointed it at website2013 (which had worked just before, for the companytesting.com site), gives an empty page (i.e. view source shows nothing there). There is nothing in the failed-request log when this happens. I can use the Explore button/link (upper right) in IIS 7.5 and see all of the files there. If I use the Browse button (either in general or on the index.asp page), I get the blank page again.

    One weirdness about how these are set up is that companytesting.com uses a login (which I think is Windows authentication; it's simply a single username and password for staff, and it keeps the GoogleBots out). Obviously, company.com does not. But redirecting the test site to website2013R1 kept the login in place. (So I'm not absolutely clear whether that's attached to the folder or to the site. Hitting the company.com site after changing the path did not yield a password request.)

    The permissions on the folders all seem to be the same, but obviously I'm missing something. Why isn't changing the physical path working? As is probably obvious, I'm not knowledgeable about servers. I did OK with 2003, but since it's not my main task and I'm buried right now, I have barely looked at 2008, so I may have really stupid questions when you ask me to check something.
