Search Results

Search found 1696 results on 68 pages for 'textbook mistake'.

Page 45/68 | < Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >

  • How do I uninstall a ruby version installed via source?

    - by Aaron McIver
    I installed a version (1.9.3-p194) of Ruby from source using make install and realized this may have been the wrong route to take. It was a mistake, and I should be using a solution such as rvm to manage my Ruby versions within the OS. I looked to see if an uninstall target existed to be run in conjunction with make, and it didn't. I then proceeded to install rvm and added the aforementioned version to my list of managed rubies, where it is now listed as ext-ruby-1.9.3-p194:

        rvm rubies

           ext-ruby-1.9.3-p194 [ x86_64 ]
        =* ruby-1.9.3-p194 [ x86_64 ]

        # => - current
        # =* - current && default
        # *  - default

    When I perform an rvm remove, it simply removes the entry from the rubies list; the files still exist within /usr/local/bin. I am not concerned with the system Ruby residing in /usr/bin, as I understand that is tied to the OS and should simply be ignored. How can I safely uninstall/remove the aforementioned version and all the places in which it was installed, short of reading through the install script?
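    A build installed with make install leaves no manifest behind, so cleanup usually means deleting what it put under the install prefix by hand. A minimal sketch, assuming the default /usr/local prefix and that nothing else on the machine depends on those paths (verify each path before removing it):

        # confirm which ruby the shell currently resolves to
        which ruby && ruby -v

        # typical files a Ruby 1.9.3 'make install' drops under /usr/local
        # (usual defaults, not guaranteed -- check before deleting)
        sudo rm -f  /usr/local/bin/ruby /usr/local/bin/irb /usr/local/bin/gem \
                    /usr/local/bin/rake /usr/local/bin/rdoc /usr/local/bin/ri /usr/local/bin/erb
        sudo rm -rf /usr/local/lib/ruby
        sudo rm -rf /usr/local/include/ruby-1.9.1
        sudo rm -f  /usr/local/lib/libruby*
        sudo rm -f  /usr/local/share/man/man1/ruby.1 /usr/local/share/man/man1/irb.1

        # afterwards, let rvm's copy own the PATH
        rvm use ruby-1.9.3-p194 --default

    Running hash -r (or opening a new shell) afterwards clears the shell's cached path to the old binary.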

    Read the article

  • Bandwidth monitoring with iptables for non-router machine

    - by user1591276
    I came across a tutorial that describes how to monitor bandwidth using iptables. I wanted to adapt it for a non-router machine, so I want to know how much data is going in and coming out, not passing through. Here are the rules I added:

        iptables -N ETH0_IN
        iptables -N ETH0_OUT
        iptables -I INPUT -i eth0 -j ETH0_IN
        iptables -I OUTPUT -o eth0 -j ETH0_OUT

    And here is a sample of the output:

        user@host:/tmp$ sudo iptables -x -vL -n
        Chain INPUT (policy ACCEPT 1549 packets, 225723 bytes)
            pkts   bytes  target    prot opt in    out   source      destination
             199   54168  ETH0_IN   all  --  eth0  *     0.0.0.0/0   0.0.0.0/0

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
            pkts   bytes  target    prot opt in    out   source      destination

        Chain OUTPUT (policy ACCEPT 1417 packets, 178128 bytes)
            pkts   bytes  target    prot opt in    out   source      destination
             201   19597  ETH0_OUT  all  --  *     eth0  0.0.0.0/0   0.0.0.0/0

        Chain ETH0_IN (1 references)
            pkts   bytes  target    prot opt in    out   source      destination

        Chain ETH0_OUT (1 references)
            pkts   bytes  target    prot opt in    out   source      destination

    As seen above, there are no packet and byte values for ETH0_IN and ETH0_OUT, which is not the same result as in the tutorial I referenced. Is there a mistake that I made somewhere? Thanks for your time.
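    In the listing above the traffic is in fact being counted, but on the jump rules inside INPUT and OUTPUT (199 packets / 54168 bytes and 201 / 19597); the ETH0_IN and ETH0_OUT chains themselves show nothing because they contain no rules of their own. One way to make per-chain totals visible, sketched under the assumption that the chain and interface names above are kept, is to give each chain a single catch-all rule whose counters accumulate:

        # one accounting rule per custom chain
        sudo iptables -A ETH0_IN  -j RETURN
        sudo iptables -A ETH0_OUT -j RETURN

        # read the counters later (and optionally zero them after each sample)
        sudo iptables -L ETH0_IN  -vxn
        sudo iptables -L ETH0_OUT -vxn
        sudo iptables -Z ETH0_IN && sudo iptables -Z ETH0_OUT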

    Read the article

  • Ubuntu+Win7--disk error press any key to restart

    - by Siddharth
    Apparently, none of the solutions in any other posts and forums worked for me. For some reason I decided to remove Ubuntu from my hard disk drive. My partition table (presently):

        (/dev/sda1)  (fat32)  900 MiB      --- (MBR, I suppose)
        (/dev/sda2)  (ntfs)   70 GiB       --- (Windows 7)
        (/dev/sda3)  (ntfs)   314.88 GiB   --- (personal file storage)
        (/dev/sda4)  (ext4)   80 GiB       --- (Ubuntu 13.04)
        (unallocated)         1.31 MiB

    So, after moving (cut-paste) everything (for backup) from the fat32 partition using Win7, I booted into Ubuntu and copied the remaining 3 files (hidden in the Win7 file explorer) -- bootmgr, bootsect.bak, and one more which I do not remember. TERRIBLE MISTAKE. After this I again booted into Windows and deleted the ext4 partition, formatted it to ntfs, and shut down the PC. Then I put in a Win7 bootable USB and, using the command prompt, ran bootrec /fixmbr and bootrec /fixboot. Restarting showed me GRUB; choosing Windows 7 showed me "Disk Error. Press any key to restart." I also installed a fresh Win7 installation on the 80 GiB partition expecting a Windows legacy bootloader with two Win7 options, but that did not work. Then I used an Ubuntu live USB to put things back to the present configuration (above), since all methods to restore the MBR failed. I copied back the fat32 partition's backup files but couldn't copy those 3 files. Somehow they had been recreated and were non-replaceable. I do not want to format the Win7 partition for a fresh installation. I have used boot-repair; its Restore MBR option brings the machine back to "Disk error..." without even going through GRUB, so I reinstalled GRUB and I'm able to boot into Ubuntu. The GRUB menu shows the Win7 option as "Windows 7 (loader) (on /dev/sda1)". paste.ubuntu.com/5753710 paste.ubuntu.com/5775999

    Read the article

  • Uninstall Glassfish and metro completely

    - by user775829
    I thought of updating my GlassFish server from 2.1 to 3.1.1 on a Linux machine. I downloaded the .ZIP package. However, while uninstalling GlassFish v2.1 I did not find the uninstall.sh file in the "bin" directory. Following are a few steps which I took: I removed the glassfish folder (rm -rf ...). After removing the files it gave me a notification at the end that it could not remove 2 files used by Metro. I can't recollect those file names, but I manually deleted that folder. I made a mistake by not uninstalling Metro first. I uninstalled Metro completely after that, but it seemed pointless (it uninstalled successfully :P). I then transferred the GlassFish 3.1.1 ZIP file, unzipped it and configured it.

    Following are a few problems I am facing:

    I cannot deploy any of my WAR files. It gives errors saying "Error creating bean, Instantiation of bean failed", etc. (However, the WAR file deploys successfully on another Linux machine.)

    When I try installing Metro v2.1 separately, it does not show the admin console, or it times out while starting the domain. The log file of the domain says it has started the domain successfully and the process is also created, but after running asadmin it takes forever and times out without showing "Domain Started Successfully".

    There is no uninstall.sh in the GlassFish v3.1.1 bin directory.

    How do I completely uninstall GlassFish v3.1.1 and Metro 2.1? What are the files which I will have to manually remove?

    Read the article

  • Ubuntu IP Configuration - multiple subnets & interfaces

    - by HaydnWVN
    Have a 'new' mail server running postfix on Ubuntu. We are having some problems configuring the subnets & interfaces. Basically 2 subnets (.253. & .254.) need to be connected through the 3rd subnet (.252.) where the router is residing.

        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet static
            address 10.62.254.199
            netmask 255.255.0.0
            network 10.62.254.0
            broadcast 10.62.255.255
            #gateway 10.62.252.138
            # dns-* options are implemented by the resolvconf package, if installed
            dns-nameservers 10.62.252.138
            dns-search ***.com

        auto eth1
        iface eth1 inet static
            address 10.62.253.199
            netmask 255.255.0.0
            network 10.62.253.0
            broadcast 10.62.255.255
            #gateway 10.62.252.138
            #dns-nameservers 10.62.254.199 10.62.253.199 10.62.252.199
            dns-nameservers 10.62.252.138
            dns-search ***.com

        auto eth2
        iface eth2 inet static
            address 10.62.252.199
            netmask 255.255.0.0
            network 10.62.252.0
            broadcast 10.62.255.255
            gateway 10.62.252.138
            #dns-nameservers 10.62.254.199 10.62.253.199 10.62.252.199
            dns-search ***.com

    I have an external support company who are looking into this (they built and configured this server), but it's taking far too long... So I'm looking to highlight the mistake!
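    One thing that stands out in the file above: all three interfaces use netmask 255.255.0.0, so each of them claims the whole 10.62.0.0/16 range and the kernel has three overlapping routes to choose between. If the .252, .253 and .254 ranges are really meant to be separate subnets, a per-interface /24 layout is the usual shape - sketched below as an assumption to confirm with whoever administers the 10.62.x.x network, not as a drop-in config:

        # sketch only: /24 masks assumed, single default gateway kept on eth2
        auto eth0
        iface eth0 inet static
            address 10.62.254.199
            netmask 255.255.255.0

        auto eth1
        iface eth1 inet static
            address 10.62.253.199
            netmask 255.255.255.0

        auto eth2
        iface eth2 inet static
            address 10.62.252.199
            netmask 255.255.255.0
            gateway 10.62.252.138
            dns-nameservers 10.62.252.138
            dns-search ***.com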

    Read the article

  • Frequency of RAM

    - by Vignesh Palani
    I have a very old system, an HP Compaq dx2080. It had 1 GB of RAM. I recently bought an EVM DDR2 1 GB RAM module with a clock rate of 667 MHz. I dual boot Windows 7 and 8. When I installed the module, Windows 7 was still using only the older 1 GB: it showed 2 GB installed and 1 GB usable in System Properties. I searched around and found that I can raise the maximum memory in msconfig, so I did and set it to 2048. Still, it was using only 1 GB. When I switched to 8, it was using the full 2 GB. Now, for my question: my system only supports 553 MHz and 667 MHz RAM. In the BIOS, I saw the new RAM showing as 800 MHz. I rechecked using Speccy and CPU-Z, and the two showed different values. The RAM is labelled 667 MHz on the module itself, no mistake there. Am I missing something? Please help. And can I continue using it? My point again: there are only two slots.

    Read the article

  • Big Oh Notation - formal definition.

    - by aloh
    I'm reading a textbook right now for my Java III class. We're reading about Big-Oh and I'm a little confused by its formal definition.

    Formal Definition: "A function f(n) is of order at most g(n) - that is, f(n) = O(g(n)) - if a positive real number c and positive integer N exist such that f(n) <= c g(n) for all n >= N. That is, c g(n) is an upper bound on f(n) when n is sufficiently large."

    Ok, that makes sense. But hold on, keep reading... the book gave me this example:

    "In segment 9.14, we said that an algorithm that uses 5n + 3 operations is O(n). We now can show that 5n + 3 = O(n) by using the formal definition of Big Oh. When n >= 3, 5n + 3 <= 5n + n = 6n. Thus, if we let f(n) = 5n + 3, g(n) = n, c = 6, N = 3, we have shown that f(n) <= 6 g(n) for n >= 3, or 5n + 3 = O(n). That is, if an algorithm requires time directly proportional to 5n + 3, it is O(n)."

    Ok, this kind of makes sense to me. They're saying that if n is 3 or greater, 5n + 3 takes less time than if n was less than 3 - thus 5n + n = 6n - right? Makes sense, since if n was 2, 5n + 3 = 13 while 6n = 12, but when n is 3 or greater 5n + 3 will always be less than or equal to 6n.

    Here's where I get confused. They give me another example:

    Example 2: "Let's show that 4n^2 + 50n - 10 = O(n^2). It is easy to see that: 4n^2 + 50n - 10 <= 4n^2 + 50n for any n. Since 50n <= 50n^2 for n >= 50, 4n^2 + 50n - 10 <= 4n^2 + 50n^2 = 54n^2 for n >= 50. Thus, with c = 54 and N = 50, we have shown that 4n^2 + 50n - 10 = O(n^2)."

    This statement doesn't make sense: 50n <= 50n^2 for n >= 50. Isn't any n going to make 50n less than 50n^2? Not just greater than or equal to 50? Why did they even mention that 50n <= 50n^2? What does that have to do with the problem? Also, 4n^2 + 50n - 10 <= 4n^2 + 50n^2 = 54n^2 for n >= 50 is going to be true no matter what n is. And how in the world does picking numbers show that f(n) = O(g(n))? Please help me understand! :(
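    Written out in standard notation, the definition and the second example say the following (this is only a restatement of what the textbook is quoted as saying, in LaTeX form):

        f(n) = O\bigl(g(n)\bigr)
            \iff \exists\, c > 0,\ \exists\, N \in \mathbb{Z}^{+} :\quad
                 f(n) \le c\, g(n) \quad \text{for all } n \ge N

        4n^2 + 50n - 10 \;\le\; 4n^2 + 50n \;\le\; 4n^2 + 50n^2 \;=\; 54\,n^2
            \quad \text{for all } n \ge 50

    The pair (c, N) only has to make the inequality hold from N onwards; it does not have to be the tightest pair that works. Here 50n <= 50n^2 in fact holds for every n >= 1, so (c, N) = (54, 1) would also witness the bound - the book simply chose N = 50 as a convenient, obviously safe cutoff.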

    Read the article

  • Why this search can not generate correct result?

    - by user482742
    Hi, all: the code below is meant to find a customer and, if he is in the list, add one to his count; if he is not in the list, add him to the list. I use the Search function to do this, but it fails and generates incorrect records: it cannot find the customer, or it finds the wrong number of customers. But if I use a for loop to iterate the list, it works and can find the customer and add new customers in that for-loop search procedure (I did not paste the for-loop search procedure here). Another problem is that there is no difference between setting list.Sorted to true or false. It seems the Search function is not correct. This search function is from an example in a Delphi textbook. The code below is with Delphi 7. Thank you.

        procedure TForm1.Create;
        begin
          list := TStringList.Create;
          list.Sorted := true;   // Search generates exactly the same, incorrect records
                                  // no matter whether list.Sorted is set true or false.
          list.Duplicates := dupIgnore;
          ..
        end;

        procedure AddCustomer;
        var
          ..
        begin
          while p1.MatchAgain do   // p1 is a regular expression
          begin
            customer := p1.MatchedExpression;
            if (Search(customer) = false) then
            begin
              list.Add(customer + '=1');
            end;
            allcustomer := allcustomer + 1;
            ..
          end;
        end;

        function TForm1.Search(customer: string): boolean;
        var
          fre: string;
          num: integer;
          L: integer;
          R: integer;
          M: integer;
          CompareResult: integer;
          found: boolean;
        begin
          result := false;
          found := false;
          L := 0;
          R := list.Count - 1;
          while (L <= R) and (not found) do
          begin
            M := (L + R) div 2;
            CompareResult := CompareText(list.Names[M], customer);
            if (CompareResult = 0) then
            begin
              fre := list.ValueFromIndex[M];
              num := StrToInt(fre);
              num := num + 1;
              list.ValueFromIndex[M] := IntToStr(num);
              found := True;
              result := true;
              exit;
            end
            else if CompareResult > 0 then
              R := M - 1
            else
              L := M + 1;
          end;
        end;

    Read the article

  • Windows7: Howto force a "do you really want to shutdown?" dialog

    - by Vokuhila-Oliba
    Sometimes I want to choose "log out current user", but then I hit "Shut down" by accident. Nearly everywhere else Windows 7 asks "do you really want to do this? Yes/No", but that's not the case when I hit the "Shut down" button: Windows 7 shuts down immediately without giving me the chance to correct my mistake. Q1: So I am wondering why Windows 7 shuts down immediately without asking "really do that?" in this case. Q2: Is there a way to change this behavior? E.g. could I force Windows 7 to display a dialog asking "Do you really want to shut down?"? ADDED: I am running Windows 7 Professional 64-bit. ADDED: I tried to change this behavior with the policy editor. It seems to be very easy to completely remove the Shutdown button from the Start menu, but I couldn't find an entry to turn on such a Yes/No dialog.

    Read the article

  • SQL Server 2012 memory usage steadily growing

    - by pgmo
    I am very worried about the SQL Server 2012 Express instance on which my database is running: the SQL Server process memory usage is growing steadily (1.5 GB after only 2 days of operation). The database is made of seven tables, each having a bigint primary key (identity) and at least one non-unique index with some included columns to serve the majority of incoming queries. An external application calls some stored procedures via Microsoft OLE DB; each of them does some calculations using intermediate temporary tables and/or table variables and finally does an upsert (UPDATE ... IF @@ROWCOUNT=0 INSERT ...) - I never DROP those temporary tables explicitly. The frequency of those calls is about 100 calls every 5 seconds (I saw that the DLL used by the external application opens a connection to SQL Server, does the call and then closes the connection for each and every call). The database files are organized in only one filegroup, and the recovery model is set to simple. Some questions to diagnose the problem:

    Is that steadily growing memory usage normal?
    Did I make any mistake in database design which probably leads to this behaviour? (no explicit temp-table drop, filegroup organization, etc.)
    Can SQL Server manage such a stored procedure call rate (100 calls every 5 seconds, i.e. 100 upserts every 5 seconds, beyond intermediate calculations)?
    Does the continuous "open connection / do sp call / close connection" pattern disturb SQL Server?
    Is it possible to diagnose what is causing such memory usage? Perhaps queues of waiting requests? (I ran sp_who2, but I didn't see a big amount of orphan connections from the external application.)
    If I restrict the amount of memory which SQL Server is allowed to use, may I sooner or later get into trouble?
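    SQL Server grows its caches toward whatever memory it is allowed to take, so steady growth by itself is expected behaviour rather than a leak; it only becomes a problem when the OS or other processes are starved. The last question can be tried directly by capping the instance. A minimal T-SQL sketch - the 1024 MB figure is an assumption to illustrate the syntax, not a sizing recommendation:

        -- show the current setting
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)';

        -- cap the instance (hypothetical value; size it to the machine)
        EXEC sp_configure 'max server memory (MB)', 1024;
        RECONFIGURE;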

    Read the article

  • Run a shell script using cron

    - by Blanca
    Hi! I have this FeedIndexer.sh:

        #!/bin/sh
        java -jar FeedIndexer.jar

    It just runs FeedIndexer.jar, which is in the same directory as the .sh. I would like to run it using crontab, so I did this:

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # m h dom mon dow user  command
        17 *  * * *  root  cd / && run-parts --report /etc/cron.hourly
        25 6  * * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6  * * 7  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6  1 * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        01 01 * * *  root  run-parts --report /home/slosada/workspace/FeedIndexer/target/FeedIndexer.sh
        #

    But I don't know how to run it. Have I made any mistake? Thank you!
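    Two details in that last crontab line are worth checking. run-parts expects a directory of scripts rather than a single file (and by default it skips names containing a dot, such as FeedIndexer.sh), and java -jar FeedIndexer.jar only works if the working directory is the one holding the jar. A sketch of a version that calls the script directly and makes the script cd to its own location first - the log path is made up, and the script still needs to be executable (chmod +x):

        # in /etc/crontab -- run the script itself at 01:01 every day
        01 01 * * *  root  /home/slosada/workspace/FeedIndexer/target/FeedIndexer.sh >> /var/log/feedindexer.log 2>&1

        # FeedIndexer.sh -- change to the script's own directory before launching the jar
        #!/bin/sh
        cd "$(dirname "$0")" || exit 1
        java -jar FeedIndexer.jar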

    Read the article

  • Issues with "There is already an object named 'xxx' in the database'

    - by Hoser
    I'm fairly new to SQL so this may be an easy mistake, but I haven't been able to find a solid solution anywhere else. The problem is that whenever I try to use my temp table, it tells me it cannot be used because there is already an object with that name. I frequently try switching up the names, and sometimes it'll let me work with the table for a little while, but it never lasts long. Am I dropping the table incorrectly? Also, I've had people suggest just using a permanent table, but this database does not allow me to do that.

        create table #RandomTableName(NameOfObject varchar(50), NameOfCounter varchar(50), SampledValue decimal)

        select vPerformanceRule.ObjectName, vPerformanceRule.CounterName, Perf.vPerfRaw.SampleValue
        into #RandomTableName
        from vPerformanceRule, vPerformanceRuleInstance, Perf.vPerfRaw
        where (ObjectName like 'Processor' AND CounterName like '% Processor Time')
           OR (ObjectName like 'System' AND CounterName like 'Processor Queue Length')
           OR (ObjectName like 'Memory' AND CounterName like 'Pages/Sec')
           OR (ObjectName like 'Physical Disk' AND CounterName like 'Avg. Disk Queue Length')
           OR (ObjectName like 'Physical Disk' AND CounterName like 'Avg. Disk sec/Read')
           OR (ObjectName like 'Physical Disk' and CounterName like '% Disk Time')
           OR (ObjectName like 'Logical Disk' and CounterName like '% Free Space' AND SampleValue > 70 AND SampleValue < 100)
        order by ObjectName, SampleValue

        drop table #RandomTableName
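    One detail worth knowing here: SELECT ... INTO creates the target table itself, so pairing it with an explicit CREATE TABLE of the same name is one way to end up with "There is already an object named '#RandomTableName' in the database". A sketch of the usual guard pattern, reusing the names from the question - drop any leftover copy first and let SELECT ... INTO do the creating:

        -- remove a leftover temp table from an earlier run in this session, if any
        IF OBJECT_ID('tempdb..#RandomTableName') IS NOT NULL
            DROP TABLE #RandomTableName;

        -- SELECT ... INTO builds #RandomTableName, so no CREATE TABLE beforehand
        SELECT vPerformanceRule.ObjectName, vPerformanceRule.CounterName, Perf.vPerfRaw.SampleValue
        INTO #RandomTableName
        FROM vPerformanceRule, vPerformanceRuleInstance, Perf.vPerfRaw
        WHERE ...   -- same predicates as in the query above
        ORDER BY ObjectName, SampleValue;

        DROP TABLE #RandomTableName;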

    Read the article

  • Microsoft Word 2008 on the Mac sometimes "Disappears" documents, really.

    - by Ross Charette
    This happens in a computer lab environment and has happened at least 3 times. We are running Microsoft Office 2008 for Mac on Leopard, everything is updated. Our users' home directories are on a network drive, but the /Library/Cache folder is running locally. Typically a student will have a Word file that they have been working on; it's been saved before they even logged onto the computer that day. They log on, open the document, click the save icon (not go to File > Save), sometimes even save multiple times, then close Word. The document is now gone. It's not hidden, there are no autosaves or anything in the Cache folder. Definitely not in the trash or trashes folder. It can't be found when you click on it in 'recent documents'. Searching meticulously through every folder in their home drive turns up nothing. They look using Finder, I look ssh'd as root into their home using ls -la. I look for similar files in case they renamed it by mistake. It's gone. Disappeared. Vaporized. It's happened to at least 3 different users in the past year. Much whining. Any idea?

    Read the article

  • How can I undo what I did when I accidentally booted linux host inside itself with VMware?

    - by ThomasGHenry
    Hello, I'm dual booting XP and Kubuntu. I wanted to boot my existing raw SCSI XP partition inside Kubuntu, not a virtual XP instance. I accidentally booted Kubuntu inside itself. I know this is a big mistake, so I interrupted the VM, which saved the state and closed. I rebooted the host and now I can't load the Kubuntu partition at boot time. I get a maintenance shell and the Kubuntu partition is read-only. I am able to boot XP as usual. I removed the HDD and tried to mount it on another computer as an external drive, and neither partition (XP or Kubuntu) is recognized; it just appears to be one device that still mounts and appears empty. From the maintenance shell I can see all the files are still on the Kubuntu partition. How can I undo what I did when I accidentally booted Kubuntu inside itself? Is it a matter of unlocking some files somewhere? How can I do that on a read-only filesystem? Thanks!

    Read the article

  • oracle access on vmware fusion

    - by gaudi_br
    Hello, I'm running Snow Leopard and I'm doing some development that requires some network knowledge. I've installed VMware Fusion 3.0 and I've set up a virtual machine with Windows 2003 Server. I need to mimic the exact configuration of another server on the network, so I really need to run the versions I'll be mentioning here. Besides, I set up two network configurations on the VM: one NAT config (so that I can have internet access) and one host-only config (because I need to use another server's MAC address and my local area network might have a problem with it). After the installation of Windows 2003 I then installed Oracle 10.2.0.1. During the installation I received a warning about the primary IP address of the system being DHCP assigned, but I ignored it (maybe it was a mistake)... Now, from experience, unless the DHCP-assigned address changes, I should be able to access the guest system's database from the host system, so I went to Safari and tried to access the Oracle EM. As it turns out, because my computer is on a company network, the company's DNS doesn't know about the virtual machine, unless of course I switch to a bridged network config. However, I don't want to do that because I don't want to mix up the domains. So I guess the question is: how can I define my own DNS or router, or whatever it is that I need to define, so that whenever I try the guest system's IP address from the host, it will use the vmnet1 or vmnet8 interface defined by VMware and bypass the DNS configuration of my local area network? I'd also like to know what to do in case I want to change IP addresses on the guest machine without having Oracle go haywire (I've noticed a few folders in the structure which are specific to the very first IP address)... Any help would be appreciated. Thanks in advance.
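    Name resolution for a host-only guest does not have to involve the company DNS at all; the Mac can carry its own static entry. A small sketch, in which the guest's host-only address (172.16.50.130) and the name oraclevm.local are both made up for illustration - substitute whatever address vmnet1 actually handed the VM:

        # on the Snow Leopard host: map a private name to the guest's host-only address
        sudo sh -c 'echo "172.16.50.130   oraclevm.local" >> /etc/hosts'

        # flush the resolver cache so the new entry is picked up immediately
        dscacheutil -flushcache

        # Enterprise Manager is then reachable by name from Safari, e.g.
        #   http://oraclevm.local:1158/em   (the port varies per install; check portlist.ini)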

    Read the article

  • What is the best time to set the IP address for a server headed to a server colocation facility?

    - by jim_m_somewhere
    What is the best time to set the IP address for a server? I have a server that I am going to install the OS on and then send to a server colocation facility. The server is going to have Internet-facing services (www, email, etc.). I can set up a "fake" IP address during install (by fake I mean private as in RFC 1918) and change the "fake" IPs to the real IPs once I set up the colocation service. The other option is to set up the colocation service, wait for them to give me the "real" IPs, and use them during the OS install. The ramifications are that if I use "fake" IPs during install, I will have to wait before I set up things like SSL certs. If I wait for IPs from the colocation provider, then I can set up SSL certs that use the "correct" (as in "real") IP addresses, with no changes to the certs until they expire. Do the "gotchas" of changing an IP address on a server outweigh the benefits of a quick install? The other danger with using "fake" IPs is that I could make a mistake when I go through the various files to change the IP address to the "live" IP address. Server OS: CentOS 6.2 or CentOS 6.3, 64 bit. Apps: Apache 2.4.X httpd, MySQL 5.X (will eventually use replication)
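    On CentOS 6 the address lives in a small, known set of files, so switching later from a placeholder RFC 1918 address to the colo-assigned one can be reduced to a short checklist plus a grep for stragglers. A rough sketch - the 10.0.0.10 placeholder and the eth0 name are assumptions for illustration:

        # files that typically carry the address on CentOS 6
        #   /etc/sysconfig/network-scripts/ifcfg-eth0   (IPADDR, NETMASK, GATEWAY)
        #   /etc/sysconfig/network                      (GATEWAY, if set globally)
        #   /etc/hosts                                  (hostname-to-IP mapping)
        #   any Apache vhost / SSL config that binds to an explicit IP

        # find anything still pointing at the placeholder
        grep -rn "10.0.0.10" /etc 2>/dev/null

        # apply the change after editing ifcfg-eth0
        service network restart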

    Read the article

  • C++ class is not recognizing string data type

    - by reallythecrash
    I'm working on a program from my C++ textbook, and this is the first time I've really run into trouble. I just can't seem to see what is wrong here. Visual Studio is telling me Error: identifier "string" is undefined. I separated the program into three files: a header file for the class specification, a .cpp file for the class implementation and the main program file. These are the instructions from my book:

    Write a class named Car that has the following member variables:
        year - an int that holds the car's model year.
        make - a string that holds the make of the car.
        speed - an int that holds the car's current speed.
    In addition, the class should have the following member functions:
        Constructor - the constructor should accept the car's year and make as arguments and assign these values to the object's year and make member variables. The constructor should initialize the speed member variable to 0.
        Accessors - appropriate accessor functions should be created to allow values to be retrieved from an object's year, make and speed member variables.

    There are more instructions, but they are not necessary to get this part to work. Here is my source code:

        // File Car.h -- Car class specification file
        #ifndef CAR_H
        #define CAR_H

        class Car
        {
        private:
            int year;
            string make;
            int speed;
        public:
            Car(int, string);
            int getYear();
            string getMake();
            int getSpeed();
        };
        #endif

        // File Car.cpp -- Car class function implementation file
        #include "Car.h"

        // Default Constructor
        Car::Car(int inputYear, string inputMake)
        {
            year = inputYear;
            make = inputMake;
            speed = 0;
        }

        // Accessors
        int Car::getYear()
        {
            return year;
        }

        string Car::getMake()
        {
            return make;
        }

        int Car::getSpeed()
        {
            return speed;
        }

        // Main program
        #include <iostream>
        #include <string>
        #include "Car.h"
        using namespace std;

        int main()
        {
        }

    I haven't written anything in the main program yet, because I can't get the class to compile. I've only linked the header file to the main program. Thanks in advance to all who take the time to investigate this problem for me.
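    The error comes from Car.h using the name string without the header or namespace that declares it; a header cannot rely on a using directive that lives in some other file. A corrected sketch of just the header, one common fix among several - it assumes qualifying with std:: is acceptable for the assignment:

        // File Car.h -- Car class specification file
        #ifndef CAR_H
        #define CAR_H

        #include <string>       // declares std::string for every file that includes Car.h

        class Car
        {
        private:
            int year;
            std::string make;   // fully qualified; no 'using namespace std;' in a header
            int speed;
        public:
            Car(int year, std::string make);
            int getYear();
            std::string getMake();
            int getSpeed();
        };
        #endif

        // Car.cpp then needs the same qualification on its definitions, e.g.
        //   std::string Car::getMake() { return make; }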

    Read the article

  • Need advice about pointers and time elapsed program. How to fix invalid operands and cannot convert errors?

    - by user1781382
    I am trying to write a program that tells the difference between two times the user inputs. I am not sure how to go about this. I get the errors:

        Line 27 | error: invalid operands of types 'int' and 'const MyTime*' to binary 'operator-'
        Line 39 | error: cannot convert 'MyTime' to 'const MyTime*' for argument '1' to 'int DetermineElapsedTime(const MyTime*, const MyTime*)'

    I also need a lot of help with this problem. I don't have a good curriculum, and my class textbook is like CliffsNotes for programming. This will be my last class at this university. The C++ textbook I use (my own, not for class) is Sams' C++ One Hour a Day.

        #include <iostream>
        #include <cstdlib>
        #include <cstring>
        using namespace std;

        struct MyTime
        {
            int hours, minutes, seconds;
        };

        int DetermineElapsedTime(const MyTime *t1, const MyTime *t2);

        long t1, t2;

        int DetermineElapsedTime(const MyTime *t1, const MyTime *t2)
        {
            return ((int)t2 - t1);
        }

        int main(void)
        {
            char delim1, delim2;
            MyTime tm, tm2;

            cout << "Input two formats for the time. Separate each with a space. Ex: hr:min:sec\n";
            cin >> tm.hours >> delim1 >> tm.minutes >> delim2 >> tm.seconds;
            cin >> tm2.hours >> delim1 >> tm2.minutes >> delim2 >> tm2.seconds;

            DetermineElapsedTime(tm, tm2);

            return 0;
        }

    I have to fix the errors first. Anyone have any ideas?
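    Both compiler errors point at the same pair of issues: the function body subtracts a pointer from an int, and main passes MyTime objects where the prototype wants pointers. A sketch of one way to line the pieces up - convert each time to seconds and pass by const reference (a design choice for illustration, not the only option):

        #include <iostream>

        struct MyTime
        {
            int hours, minutes, seconds;
        };

        // total seconds since 00:00:00 for one time value
        static int ToSeconds(const MyTime &t)
        {
            return t.hours * 3600 + t.minutes * 60 + t.seconds;
        }

        // elapsed seconds between the two inputs
        int DetermineElapsedTime(const MyTime &t1, const MyTime &t2)
        {
            return ToSeconds(t2) - ToSeconds(t1);
        }

        int main()
        {
            char delim;
            MyTime a = {0, 0, 0}, b = {0, 0, 0};

            std::cout << "Enter two times as hr:min:sec, separated by whitespace\n";
            std::cin >> a.hours >> delim >> a.minutes >> delim >> a.seconds;
            std::cin >> b.hours >> delim >> b.minutes >> delim >> b.seconds;

            std::cout << "Elapsed: " << DetermineElapsedTime(a, b) << " seconds\n";
            return 0;
        }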

    Read the article

  • How to Identify Which Hardware Component is Failing in Your Computer

    - by Chris Hoffman
    Concluding that your computer has a hardware problem is just the first step. If you're dealing with a hardware issue and not a software issue, the next step is determining what hardware problem you're actually dealing with. If you purchased a laptop or pre-built desktop PC and it's still under warranty, you don't need to care about this. Have the manufacturer fix the PC for you — figuring it out is their problem. If you've built your own PC or you want to fix a computer that's out of warranty, this is something you'll need to do on your own.

    Blue Screen 101: Search for the Error Message

    This may seem like obvious advice, but searching for information about a blue screen's error message can help immensely. Most blue screens of death you'll encounter on modern versions of Windows will likely be caused by hardware failures. The blue screen of death often displays information about the driver that crashed or the type of error it encountered. For example, let's say you encounter a blue screen that identified "NV4_disp.dll" as the driver that caused the blue screen. A quick Google search will reveal that this is the driver for NVIDIA graphics cards, so you now have somewhere to start. It's possible that your graphics card is failing if you encounter such an error message.

    Check Hard Drive SMART Status

    Hard drives have a built-in S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) feature. The idea is that the hard drive monitors itself and will notice if it starts to fail, providing you with some advance notice before the drive fails completely. This isn't perfect, so your hard drive may fail even if SMART says everything is okay. If you see any sort of "SMART error" message, your hard drive is failing. You can use SMART analysis tools to view the SMART health status information your hard drives are reporting.

    Test Your RAM

    RAM failure can result in a variety of problems. If the computer writes data to RAM and the RAM returns different data because it's malfunctioning, you may see application crashes, blue screens, and file system corruption. To test your memory and see if it's working properly, use Windows' built-in Memory Diagnostic tool. The Memory Diagnostic tool will write data to every sector of your RAM and read it back afterwards, ensuring that all your RAM is working properly.

    Check Heat Levels

    How hot is it inside your computer? Overheating can result in blue screens, crashes, and abrupt shutdowns. Your computer may be overheating because you're in a very hot location, it's ventilated poorly, a fan has stopped inside your computer, or it's full of dust. Your computer monitors its own internal temperatures and you can access this information. It's generally available in your computer's BIOS, but you can also view it with system information utilities such as SpeedFan or Speccy. Check your computer's recommended temperature level and ensure it's within the appropriate range. If your computer is overheating, you may see problems only when you're doing something demanding, such as playing a game that stresses your CPU and graphics card. Be sure to keep an eye on how hot your computer gets when it performs these demanding tasks, not only when it's idle.

    Stress Test Your CPU

    You can use a utility like Prime95 to stress test your CPU. Such a utility will force your computer's CPU to perform calculations without allowing it to rest, working it hard and generating heat. If your CPU is becoming too hot, you'll start to see errors or system crashes. Overclockers use Prime95 to stress test their overclock settings — if Prime95 experiences errors, they throttle back on their overclocks to ensure the CPU runs cooler and more stable. It's a good way to check if your CPU is stable under load.

    Stress Test Your Graphics Card

    Your graphics card can also be stress tested. For example, if your graphics driver crashes while playing games, the games themselves crash, or you see odd graphical corruption, you can run a graphics benchmark utility like 3DMark. The benchmark will stress your graphics card and, if it's overheating or failing under load, you'll see graphical problems, crashes, or blue screens while running the benchmark. If the benchmark seems to work fine but you have issues playing a certain game, it may just be a problem with that game.

    Swap it Out

    Not every hardware problem is easy to diagnose. If you have a bad motherboard or power supply, their problems may only manifest through occasional odd issues with other components. It's hard to tell if these components are causing problems unless you replace them completely. Ultimately, the best way to determine whether a component is faulty is to swap it out. For example, if you think your graphics card may be causing your computer to blue screen, pull the graphics card out of your computer and swap in a new graphics card. If everything is working well, it's likely that your previous graphics card was bad. This isn't easy for people who don't have boxes of components sitting around, but it's the ideal way to troubleshoot. Troubleshooting is all about trial and error, and swapping components out allows you to pin down which component is actually causing the problem through a process of elimination.

    This isn't a complete guide to everything that could likely go wrong and how to identify it — someone could write a full textbook on identifying failing components and still not cover everything. But the tips above should give you some places to start dealing with the more common problems.

    Image Credit: Justin Marty on Flickr

    Read the article

  • CRM 2011 - Workflows Vs JavaScripts

    - by Kanini
    In the Contact entity, I have the following attributes:

        Preferred email - a read-only field of type Email
        Personal email 1 - an email field
        Personal email 2 - an email field
        Work email 1 - an email field
        Work email 2 - an email field
        School email - an email field
        Other email - an email field
        Preferred email option - an option set with the following values: {Personal email 1, Personal email 2, Work email 1, Work email 2, School email, Other email}

    None of the above mentioned fields are required.

    Requirement: When the user picks a value from Preferred email option, we copy the email address available in that field into the Preferred email field.

    Implementation: The Solution Architect suggested that we implement the above requirement as a workflow. The reason he provided was that most of the time these values are populated by an external website and the data is then fed into the CRM 2011 system. So, when they update Preferred email option via a web service call to CRM, the workflow will run and update the Preferred email field.

    My argument / solution:

    What will happen if I do not pick a value from the Preferred email option set? Do I set it to any of the email addresses that has a value in it? If so, what if more than one of the email address fields is populated, i.e., what if Personal email 1 and Work email 1 are populated but no value is picked in the option set?

    What if a value existed in the Preferred email option set and I then change it to NULL?

    Should the field Preferred email (where the text value of the email address is stored) be set to read only? If not, what if I have picked Personal email 1 in the option set and then edit the Preferred email address text field with a completely new email address? If yes, then we are enforcing that the preferred email should be one among Personal email 1, Personal email 2, Work email 1, Work email 2, School email or Other email [my preference would be this].

    What if I had a value of [email protected] in the Personal email 1 field, Personal email 2 is empty, and I choose Personal email 1 in the drop down for Preferred email (this will set the Preferred email field to [email protected]), and later I change the value to Personal email 2 in Preferred email option? It overwrites a valid email address with nothing. I agree that it would be highly unlikely that a user will pick Preferred email as Personal email 2 and not have a value in it, but nevertheless it is a possible scenario, isn't it?

    What if users typed a value in Personal email 1 but by mistake picked Personal email 2 in the option set, and the Personal email 2 field had no value in it?

    Solution:

    The field Preferred email option should be a required field.
    A JS function should run whenever Preferred email option is changed. That function should set the relevant email field as required (based on the option chosen) and another JS function should be called (see the next step).
    A JS function should update the value of Preferred email with the value in the email field picked in the option set. The JS function should also run every time someone updates the actual email field which is chosen in the option set.
    The people managing the external website should update the Preferred email field - surely, if they can update Preferred email option via a web service call, it is easy enough to update Preferred email too, right?

    Question: Which is the better method? Should it be written as JS or as a workflow? Also, whose responsibility is it to update the Preferred email field when the data flows from an external website? I am new to CRM 2011 but have around 6 years of experience as a CRM consultant (with other products). I do not come from a development background, as I started off as an Application Support Engineer, but I have picked up development in the last couple of years.
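    If the JS route is chosen, the CRM 2011 form scripting model (Xrm.Page) covers both steps with an OnChange handler on the option set that makes the matching field required and copies its value across. A rough sketch - the schema names (new_preferredemailoption, new_personalemail1, new_preferredemail, ...) and the option-set values are placeholders, not the real ones from this organization:

        // OnChange handler for the "Preferred email option" option set (CRM 2011 form script)
        function preferredEmailOptionOnChange() {
            // option-set value -> schema name of the matching email field (all values hypothetical)
            var map = {
                100000000: "new_personalemail1",
                100000001: "new_personalemail2",
                100000002: "new_workemail1",
                100000003: "new_workemail2",
                100000004: "new_schoolemail",
                100000005: "new_otheremail"
            };

            var option = Xrm.Page.getAttribute("new_preferredemailoption").getValue();
            var source = map[option];
            if (!source) { return; }

            // make the chosen source field required, then copy its value into Preferred email
            Xrm.Page.getAttribute(source).setRequiredLevel("required");
            Xrm.Page.getAttribute("new_preferredemail").setValue(
                Xrm.Page.getAttribute(source).getValue());
        }

    The same function can be registered on the OnChange event of each email field so a later edit keeps Preferred email in sync. Note that form script only runs in the browser, so updates arriving through the external website's web service calls would still need a workflow or plug-in - which is the trade-off the question is really about.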

    Read the article

  • SQL SERVER – What is SSRS and Why SSRS is asked for in many Job Opening?

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site. This will be a 5-day blog post series on getting started with SSRS. Today's post will show the importance of SSRS in the business. Why is SSRS asked for in so many job openings? If you talk to an SSRS expert it's very clear to them exactly why companies really need this invention and how it saves time and adds business value. You don't have to be an SSRS expert to know its value or to start using it. For example, you don't have to be an airline pilot to know the usefulness of modern transportation. Even the people who don't know how to run SSRS but need the reports can tell you why it is needed. This blog post will go into why SSRS is an important invention by showing how it improves the usage of information in your company. Before SSRS there has always been a need for a company to benefit from the use of its own information. Excel spreadsheets have been a popular way to do this for a long time. With SSRS you can still use this solution and gain many other options too. A friend of mine told me a story about doing database work in the 90s for a major company and how he wished SSRS was available back then. The Vice President of the marketing channel would often come to him just before an important meeting with the board of directors. He often needed to show how certain product sales were performing over time. All this information was in the database, so it was my friend's job to get the information out and organized into a medium the VP could use. This medium was usually Excel. The VP often had meetings all over the world where he showcased this Excel report. The solution for getting the report to him anywhere he was in the world was an Excel file attached to an e-mail. This worked pretty well but with some drawbacks. One time my friend sent the wrong file in the e-mail. A few minutes later my friend realized his mistake and sent another frantic e-mail to the VP. This one was saying to ignore the last e-mail and use this newer one. Would the VP see the correct e-mail in time? If SSRS had been available, my friend could have created a solution that let the VP run the report any time he wished. The report could have been published to the company intranet where the VP could run it from any of the offices he happened to be traveling to that month. There is a fair amount of work up front to develop and publish the report, but once that work is completed, the report can be reused as many times as needed. My friend could even be on vacation on the first day of the month and the VP would still get his real-time report. Not only could the report show the most recent data, the VP could choose to view reports of previous months with just a few clicks. A deployed SSRS report is user friendly, and can also be configured to protect reports from being run by the wrong people.

    Tomorrow's Post

    Tomorrow's blog post will show how to know if you already have SSRS installed. If you want to learn SSRS in easy, simple words – I strongly recommend you get the Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS

    Read the article

  • NServiceBus Generic Host and mqsvc.exe high CPU

    - by Michael Stephenson
    We have been doing some work with NServiceBus recently and observed some unusual behaviour which was caused by our mistake and seemed worthy of a small post.

    The Scenario

    In our solution we were doing some standard NServiceBus stuff by pushing a message to a queue using NServiceBus. We had a direct send/receive scenario rather than a publish/subscribe one. The background process which was meant to collect the message and then process it was a normal NServiceBus message handler. We would run NServiceBus.Host.exe, which would find the handler and then do the usual NServiceBus magic.

    The Problem

    In this solution we were creating some automated tests around this module of the integration process to ensure that it would work well. We had two tests.

    Test 1: This test would start NServiceBus.Host.exe using the Process object, then seed a message to the queue via our web service façade sitting above the queue which wrapped NServiceBus. The background process would then process the message and the test would check the message had been processed fine. If all was well then the NServiceBus.Host.exe process was stopped.

    Test 2: In test 2 we would do a very similar thing, except that instead of starting the process the test would install NServiceBus.Host.exe as a Windows service and start the service before the test; once the test was executed it would stop the service.

    The Results of the Tests

    Test 1 worked really well; however, in test 2 we found that it didn't really work at all. Instead of the background process doing its work, we were finding that between mqsvc.exe and NServiceBus.Host.exe the CPU on the machine was maxed and nothing was really happening.

    The Solution

    After trying a few things we found the permissions on the queue were not set correctly. Once this was resolved it all worked fine, CPU was not excessive, and it ran just like the console application. I think the couple of takeaways from this are:

    Make sure you set the Windows service for the NServiceBus Generic Host to the right credentials. When you install the generic host as a Windows service, by default it will use the default Windows credentials. For any production-like scenario you should be using a domain account to run the process via the Windows service.

    Make sure you have the queue set with the right permissions. For the credentials you have used to configure the generic host as a Windows service, you should ensure that this user has the appropriate permissions for any queues it will interact with.

    Make sure you turn on the right logging configuration in NServiceBus. When this wasn't working correctly we didn't know there was an issue; we were just experiencing the high CPU condition. I am a little surprised that there wasn't something logged and that the process didn't crash. I guess this could be by design, bearing in mind that the process could be monitoring many queues. On this point I'm just saying that originally we didn't have all of the log4net logging which is available from NServiceBus turned on. It's probably a good idea to have this turned on and configured until you are happy your solution is working fine.

    Thanks to Ahmed Hashmi on my team who got this working in the end.

    Read the article

  • The dislikes of TDD

    - by andrewstopford
    I enjoy debates about TDD, and Brian Harry's blog post is no exception. Brian sounds out what he likes and dislikes about TDD, and it's the dislikes I'll focus on.

    The idea of having unit tests that cover virtually every line of code that I've written that I have to refactor every time I refactor my code makes me shudder. Doing it this way makes me take nearly twice as long as it would otherwise take and I don't feel like I get sufficient benefits from it.

    Refactoring your tests to match your refactored code sounds like the tests are suffering. Too many hard dependencies with no SOLID concerns are a surefire reason you would do this. Maybe at the start of a TDD cycle you would need to do this as your design evolves and you remove these dependencies, but this should quickly be resolved as you refactor. If you find yourself still doing it, then stop and look back at your design.

    Don't get me wrong, I'm a big fan of unit tests. I just prefer to write them after the code has stopped shaking a bit. In fact most of my early testing is "manual". Either I write a small UI on top of my service that allows me to plug in values and try it, or write some quick API tests that I throw away as soon as I have validated them.

    The problem with this is that a UI can make assumptions about your code that you then just unit test around, and very quickly the design becomes bad and technical debt sweeps in. If you want to black-box test your code with a UI, then do so after your TDD cycles, not before.

    This is probably my biggest issue with a literal TDD interpretation. TDD says you never write a line of code without a failing test to show you need it. I find it leads developers down a dangerous path. Without any help from a methodology, I have met way too many developers in my life that "back into a solution". By this, I mean they write something, it mostly works and they discover a new requirement so they tack it on, and another and another, and when they are done, they've got a monstrosity of special cases each designed to handle one specific scenario. There's way more code than there should be and it's way too complicated to understand. I believe in finding general solutions to problems from which all the special cases naturally derive rather than building a solution of special cases. In my mind, to do this, you have to start by conceptualizing and coding the framework of the general algorithm. For me, that's a relatively monolithic exercise.

    TDD is a development practice, not a methodology; the danger is that the solution becomes a mass of different things that violate DRY. TDD won't solve these problems; only good communication and practices like pairing will help. Above all else, an assumption that TDD replaces a methodology is a mistake. Combine it with whatever works for your team/business, but only good communication will help. A good naming scheme/structure for folders, files and tests can help you and your team isolate which tests are for what.

    Read the article

  • Silverlight Cream for January 26, 2011 -- #1036

    - by Dave Campbell
    In this all-submittal Issue: XamlNinja, Kevin Dockx, Steve Wortham, Andrea Boschin, Mick Norman, Colin Eberhardt, and Rudi Grobler (-2-, -3-, -4-, -5-).

    Above the Fold:
    Silverlight: "Getting an invalid cross-thread exception in Silverlight?" Kevin Dockx
    WP7: "WP7 Contrib – the last messenger" XamlNinja
    ISO: "How many files are too many files for isolated storage?" Mick Norman

    Shoutouts: Telerik announced a free WP7 Webinars series that you probably don't want to miss: Join Us for the Special Free Windows Phone 7 Webinars Series. Guest lecturers - Shawn Wildermuth and Mark Arteaga

    From SilverlightCream.com:

    WP7 Contrib – the last messenger: XamlNinja has a great post up extending Laurent's IMessenger to deal with a tricky issue of trying to fire a message from one VM to another even if the 2nd VM isn't alive yet... oh, and this is in WP7Contrib, so go grab it!

    Getting an invalid cross-thread exception in Silverlight?: Kevin Dockx has a solution to a problem we've all had... the 'invalid cross-thread exception' ... and the solution is even for those of us trying to do this in a VM... cool and easy solution, Kevin!

    Mastering Storyboards One Mistake at a Time: Steve Wortham is back with a tutorial with a great title :) ... check out the progression from one success to another in this picture/title viewer ... don't miss the very end where he has the control rolled up into a CaptionedImageHyperlink, and a link to download it!

    Windows Phone 7 - Part #2: Your First Application: Andrea Boschin has part 2 of his SilverlightShow WP7 series up. Lots of good intro material here on the manifest file and app.xaml ... he even gets into the ApplicationBar, phone orientation, and the Metro theme.

    How many files are too many files for isolated storage?: Mick Norman alerted me to his blog early this morning, and this is his latest post... interesting tests of how many files are too many for ISO on your WP7... and I have to admit... he's stuffing a boatload of them out there in these tests! ... great info Mick! and thanks for the links.

    A Navigator Control For Visiblox Time Series Charts: Colin Eberhardt's latest post is about creating an interactive navigator for large time series datasets in Visiblox charts... check the images at the top of the post, and it'll be obvious :) ... very cool stuff.

    MVVM Frameworks with WP7 support: Rudi Grobler has been very busy, and if you check the dates, these posts are all in a day or two! This first one highlights two contenders for MVVM on WP7: Caliburn and MVVMLight... both well-supported... quick intro to each followed by good links out to the authors' sites.

    Reading barcodes from your WP7 device: Rudi Grobler also has a cool post up on reading barcodes with your WP7... he's using the ZXing Barcode Scanning Library, and makes quick work of the job.

    Taking Sterling for a Test-Drive: Rudi Grobler has a quick intro to Sterling, Jeremy Likness' ISO database for Silverlight, up... quickly taking care of writing and reading back data.

    SQLite on WP7: After his discussion of Sterling, Rudi Grobler is now demonstrating the use of SQLite that has been ported to WP7. Check out his demo code... looks pretty easy to use.

    Hacking the WP7 Camera (The basics): Rudi Grobler's latest post is on getting direct access to the camera on WP7... be sure to do all the downloads and check out the external links he has.

    Stay in the 'Light!
Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Your Job Search Should be More Than Just a New Year's Resolution

    - by david.talamelli
    I love the beginning of a new year; it is a great chance to refocus and either re-evaluate the goals you are working towards or set new ones. I don't have any statistics to measure this, but I am sure that one of the more popular new year's resolutions in the general workforce is to either get a new job or work to further develop one's career. I think this is a good idea: in today's competitive workforce, people should have a plan of what they want to do, what role they are after and how to get there. One common mistake I think many people make, though, is treating a career plan as a once-a-year thought. When people finish the holiday season with their new year's resolution to find a new job fresh in their mind, you can see the enthusiasm and motivation a person has to make something happen. Emails are sent, calls are made, applications are made, networking is happening, etc. Finding the right role, however, can be difficult; while it would be great if that dream role was available just at the time you happened to be looking for it, in reality this is not always the case. Job seekers need to keep reminding themselves that while sometimes that dream job is available at the same time they are looking, a job search can also be a difficult and long process. Many people who set out with the best of intentions in January to find a new job can soon lose interest in the search if they do not immediately find a role. Just like the Christmas decorations are put away and the photos from New Year's are stored away, a job seeker's motivation may slowly decrease until that person finds themselves 12 months later in the same situation, in the same role, looking for that new opportunity again. Rather than just "going for it" and looking for a role in the month of January, a person's job search or career plan should be an ongoing activity and thought process that is constantly updated and evaluated over the course of the year. It can be hard to stay motivated over an extended period of time, especially when you are newly motivated and ready for that new role and the results are not immediate. Rather than letting your job search fall down the priority list and into the "too hard basket", here are a few ideas that may keep your enthusiasm fresh:

    Update your resume every 6 months, even if you are not looking for a job - it is easy to forget what you have accomplished if you don't keep your details updated. Also, it is good to be prepared and have a resume ready to go in case you do get an unexpected phone call for that 'dream job' you have been hoping for.

    Work out what you want out of your next role before you begin your job search - rather than aimlessly searching job ads or talking to people, think of the organisations or type of role you would like before you search. If you know what you are looking for, it will be much easier to work out how to get there than if you do not know what you want.

    Don't expect immediate results once you decide to look for another job; things don't always fall into place. Timing and delivery can be important pieces of being selected for a role, and companies don't hire every role in January.

    Have an open mind - people you meet or talk to may not produce immediate results for your job search, but every connection may help you get a bit closer to what you are after.

    These actions will not guarantee a positive result, but in today's competitive workforce every little bit of extra preparation and planning helps.
All the best for 2011 and I hope your career plan whatever it may be is a success.

    Read the article
