Search Results

Search found 15914 results on 637 pages for 'physical security'.

  • Browsers (IE and Firefox) freeze when copying large amount of text

    - by Matt
    I have a web application - a Java servlet - that delivers data to users in the form of a text printout in a browser (text marked up with HTML so that it displays in the browser the way we want it to). The text does display in different colors, though most of it is black. One typical mode of operation is this: 1. User submits a form to request data. 2. Servlet delivers the HTML file to the browser. 3. User does CTRL+A to select all the text. 4. User does CTRL+C to copy all the text. 5. User goes to a text editor and does CTRL+V to paste the text. In the testing where I'm having this problem, step #2 successfully loads all the data - we wait for that to complete. We can scroll down to the end of what the browser loaded and see the end of the data. However, the browser freezes on step #3 (Firefox) or on step #4 (IE). Because step #2 finishes, I think it is a browser/memory issue, and not an issue with the web application. If I run queries that deliver smaller amounts of data (but after several queries we get the same data we would have gotten above in one query) and copy/paste that text, the file I save it into ends up being about 8 MB. If I save the browser's displayed HTML to a file on my computer via File-Save As from the browser menu, it works fine and the file is about 22 MB. We've tried this on two different computers at work (both running Windows XP, with at least 2 GB of RAM and many GB of free disk space), using Firefox and IE. We also tried it on a home computer on a home network outside of work (thinking it might be our IT security software causing the problem), running Windows 7 and using IE, and still had the problem. When I've done this, I can see whatever browser I'm using utilizing the CPU at 50%. Firefox's memory usage grows to about 1 GB; IE's stays in the several hundred MBs. We once let this run for half an hour, and it did not complete. I'm most likely going to modify the web app to offer the option of delivering a plain text file for download, and I imagine that will get the users what they need. But in the meantime, because I'm curious (and because I don't like my application freezing people's browsers), does anyone have any ideas about the browser freezing? I understand that sometimes you just reach your memory limit, but 22 MB sounds to me like an amount I should be able to copy to the clipboard.

    Read the article

  • This Android SDK requires Android Developer Toolkit version 22.0.0 or above. Current version is 21.x.x.

    - by user2626673
    Hi, I have a problem with my Eclipse and the SDK (I downloaded and installed the latest ADT Bundle for Windows). When I start Eclipse I get this message: "This Android SDK requires Android Developer Toolkit version 22.0.0 or above. Current version is 20.0.0. Please update your SDK tools to the latest version." I have tried the option Help - Check for Updates, but no new updates were found. Then I tried this one: How to update your ADT to the latest version: in Eclipse go to Help - Install New Software - Add; inside Add Repository write the Name: ADT (or whatever you want) and the Location: https://dl-ssl.google.com/android/eclipse/ ; after loading you should get Developer Tools and NDK Plugins; check both if you want to use the Native Development Kit (NDK) in the future, or check Developer Tools only; click Next, then Finish. But I don't have the option to click Next to finish (the Back, Next and Finish buttons are greyed out). Then I tried this method: go here and download the latest version of ADT-22.0.4.zip; in Eclipse > Help > Install new software... > uncheck "Contact all update sites during install to find required software" (the last preference at the bottom), which avoids any unwanted delays during the install; then on the same screen (at the top) click Add > Archive > select the downloaded ADT-X.X.X.zip > follow the on-screen installation steps. But I had the same problem when it came time to finish the installation: no option to click Next. Then I tried this one: Help - Install New Software in the ADT menu; type https://dl-ssl.google.com/android/eclipse/site.xml in "Work with:" and press Enter; you can see the "Developer Tools" item; select it and click Next; click Next one more time; click Finish, accepting the terms of the license agreements; click OK in the "Security Warning" window; let the installer restart ADT after installing the tools. But with this option I have the same problem as above: I can't click Next to finish. http://i30.photobucket.com/albums/c316/caslor_1978/diafora/atdproblem_zps0d141b7b.jpg I checked my version and it is the latest, but I still have the problem: http://i30.photobucket.com/albums/c316/caslor_1978/diafora/atdproblem2_zps81de6317.jpg How can I fix this problem? Any suggestions? Win7 / 32bit / Java SE Development Kit 7 Update 25

    Read the article

  • Determining if Memory Pointer is Valid - C++

    - by Jim Fell
    It has been my observation that if free( ptr ) is called where ptr is not a valid pointer to system-allocated memory, an access violation occurs. Let's say that I call free like this: LPVOID ptr = (LPVOID)0x12345678; free( ptr ); This will most definitely cause an access violation. Is there a way to test that the memory location pointed to by ptr is valid system-allocated memory? It seems to me that the memory management part of the Windows OS kernel must know what memory has been allocated and what memory remains available for allocation. Otherwise, how could it know if enough memory remains to satisfy a given request? (rhetorical) That said, it seems reasonable to conclude that there must be a function (or set of functions) that would allow a user to determine if a pointer is valid system-allocated memory. Perhaps Microsoft has not made these functions public. If Microsoft has not provided such an API, I can only presume that it was for an intentional and specific reason. Would providing such a hook into the system pose a significant threat to system security? Situation Report: Although knowing whether a memory pointer is valid could be useful in many scenarios, this is my particular situation: I am writing a driver for a new piece of hardware that is to replace an existing piece of hardware that connects to the PC via USB. My mandate is to write the new driver such that calls to the existing API for the current driver will continue to work in the PC applications in which it is used. Thus the only required change to existing applications is to load the appropriate driver DLL(s) at startup. The problem here is that the existing driver uses a callback to send received serial messages to the application; a pointer to allocated memory containing the message is passed from the driver to the application via the callback. It is then the responsibility of the application to call another driver API to free the memory by passing the same pointer back from the application to the driver. In this scenario the second API has no way to determine whether the application has actually passed back a pointer to valid memory.

    Read the article

  • Unexpected return value

    - by Nicholas Gibson
    Program stopped compiling at this point: What is causing this error? (Error is at the bottom of post) public class JFrameWithPanel extends JFrame implements ActionListener, ItemListener { int packageIndex; double price; double[] prices = {49.99, 39.99, 34.99, 99.99}; DecimalFormat money = new DecimalFormat("$0.00"); JLabel priceLabel = new JLabel("Total Price: "+price); JButton button = new JButton("Check Price"); JComboBox packageChoice = new JComboBox(); JPanel pane = new JPanel(); TextField text = new TextField(5); JButton accept = new JButton("Accept"); JButton decline = new JButton("Decline"); JCheckBox serviceTerms = new JCheckBox("I Agree to the Terms of Service.", false); JTextArea termsOfService = new JTextArea("This is a text area", 5, 10); public JFrameWithPanel() { super("JFrame with Panel"); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); pane.add(packageChoice); setContentPane(pane); setSize(250,250); setVisible(true); packageChoice.addItem("A+ Certification"); packageChoice.addItem("Network+ Certification "); packageChoice.addItem("Security+ Certifictation"); packageChoice.addItem("CIT Full Test Package"); pane.add(button); button.addActionListener(this); pane.add(text); text.setEditable(false); text.setBackground(Color.WHITE); text.addActionListener(this); pane.add(termsOfService); termsOfService.setEditable(false); termsOfService.setBackground(Color.lightGray); pane.add(serviceTerms); serviceTerms.addItemListener(this); pane.add(accept); accept.addActionListener(this); pane.add(decline); decline.addActionListener(this); } public void actionPerformed(ActionEvent e) { packageIndex = packageChoice.getSelectedIndex(); price = prices[packageIndex]; text.setText("$"+price); Object source = e.getSource(); if(source == accept) { if(serviceTerms.isSelected() = false) // line 79 { JOptionPane.showMessageDialog(null,"Please accept the terms of service."); } else { JOptionPane.showMessageDialog(null,"Thanks."); } } } Error: \Desktop\Java Programming\JFrameWithPanel.java:79: unexpected type required: variable found : value if(serviceTerms.isSelected() = false) ^ 1 error

    Read the article

  • Message passing chrome extension

    - by Mayur Kataria
    I wants to create an extension where content script will send message to background page and then on browser action means clicking on extension icon will access that background page and get some data.I am using chrome Version 23.0.1271.64 m on windows8. I am getting following error. Port error: Could not establish connection. Receiving end does not exist. I tried to solve the same. but people are using sendRequest which is not supported by chrome20+. i also found solution mentioned for chrome 20+. But not working. Please help. Below is the file contents. manifest.json { "name": "Test Extension", "version": "1.0", "manifest_version": 2, "description": "A test extension.", "background": "background.html", "content_scripts": [ { "matches": ["<all_urls>"], "js": ["jquery.js","content.js"] } ], "permissions": ["tabs", "http://*/", "https://*/"], "browser_action": { "default_icon": "icon.png", "default_popup": "popup.html" } } background.html <html> <head> <script src="background.js"></script> </head> <body> <h1>Wy</h1> </body> </html> background.js chrome.extension.onMessage.addListener(function(request, sender, sendResponse) { // Chrome 20+ alert(request); console.log('received in listener'); sendResponse({farewell: "goodbye"}); }); content.js $(function(){ console.log('start-sending message'); chrome.extension.sendMessage({greeting: "hello"},function(response){alert(response);}); console.log('end-sending message'); }); popup.html <!doctype html> <html> <head> <title>Getting Started Extension's Popup</title> </style> <!-- JavaScript and HTML must be in separate files for security. --> <script src="jquery.js"></script> <script src="popup.js"></script> </head> <body> </body> </html> popup.js $(function(){ var str_html = "<tr><td width='60%'>S</td><td width='40%'>15</td></tr><tr><td width='60%'>M</td><td width='40%'>25</td></tr>"; $('#sizes_container').html(str_html); var bkg = chrome.extension.getBackgroundPage(); console.log(bkg); });

    Read the article

  • PHP Shared Sessions across Domain

    - by bigstylee
    Hi, I have seen a few answers to this on SO, but most of them are concerned with the use of subdomains, and none of those have worked for me. The common one is the use of session.cookie_domain, which from my understanding will only work with subdomains. I am interested in a solution that deals with entirely different domains (and includes the possibility of subdomains). Unfortunately, project deadlines being what they are, time is not on my side, so I turn to SO's expertise and experience. The current project brief is to be able to log into one site, which currently only stores the user_id in the session, and then be able to retrieve this value while on a different domain within the same server environment. Session data is being stored in and retrieved from a database where the session id is the primary key. I am hoping to find a "lightweight" and "easy" to implement solution. The system is utilising an in-house Model View Controller design pattern, so all requests (including those for different domains) are run through a single bootstrap script. Using the domain name as a variable, this determines what context to display to the user. One option that did look like it had potential is the use of a hidden image, using the alt tag to set the user id. My first impression is that this seems "too easy" (if it is even possible) and riddled with security flaws. Discuss? Another option I considered is using the IP and User Agent for authentication, but again I feel this is not going to be a reliable option due to shared networks and changing IP addresses. My third option (and the preferred one), which I have not yet seen discussed, is using htaccess to fool the user into thinking that they are on a different domain when in fact Apache is redirecting; something like www.foo.com/index.php?domain=bar.com&controller=news/categories/1 but displayed to the user as www.bar.com/news/categories/1. foo.com represents the "main site domain" which all requests are run through, and bar.com is what the user thinks they are accessing. The controller request dictates the page and view being requested. Is this possible? Are there other options? Pros/cons? Thanks in advance!

    Read the article

  • COM port read - Thread remains alive after timeout occurs

    - by Sna
    Hello to all. I have a dll which includes a function called ReadPort that reads data from serial COM port, written in c/c++. This function is called within an extra thread from another WINAPI function using the _beginthreadex. When COM port has data to be read, the worker thread returns the data, ends normaly, the calling thread closes the worker's thread handle and the dll works fine. However, if ReadPort is called without data pending on the COM port, when timeout occurs then WaitForSingleObject returns WAIT_TIMEOUT but the worker thread never ends. As a result, virtual memory grows at about 1 MB every time, physical memory grows some KBs and the application that calls the dll becomes unstable. I also tryied to use TerminateThread() but i got the same results. I have to admit that although i have enough developing experience, i am not familiar with c/c++. I did a lot of research before posting but unfortunately i didn't manage to solve my problem. Does anyone have a clue on how could i solve this problem? However, I really want to stick to this kind of solution. Also, i want to mention that i think i can't use any global variables to use some kind of extra events, because each dll's functions may be called many times for every COM port. I post some parts of my code below: The Worker Thread: unsigned int __stdcall ReadPort(void* readstr){ DWORD dwError; int rres;DWORD dwCommModemStatus, dwBytesTransferred; int ret; char szBuff[64] = ""; ReadParams* params = (ReadParams*)readstr; ret = SetCommMask(params->param2, EV_RXCHAR | EV_CTS | EV_DSR | EV_RLSD | EV_RING); if (ret == 0) { _endthreadex(0); return -1; } ret = WaitCommEvent(params->param2, &dwCommModemStatus, 0); if (ret == 0) { _endthreadex(0); return -2; } ret = SetCommMask(params->param2, EV_RXCHAR | EV_CTS | EV_DSR | EV_RLSD| EV_RING); if (ret == 0) { _endthreadex(0); return -3; } if (dwCommModemStatus & EV_RXCHAR||dwCommModemStatus & EV_RLSD) { rres = ReadFile(params->param2, szBuff, 64, &dwBytesTransferred,NULL); if (rres == 0) { switch (dwError = GetLastError()) { case ERROR_HANDLE_EOF: _endthreadex(0); return -4; } _endthreadex(0); return -5; } else { strcpy(params->param1,szBuff); _endthreadex(0); return 0; } } else { _endthreadex(0); return 0; } _endthreadex(0); return 0;} The Calling Thread: int WINAPI StartReadThread(HANDLE porthandle, HWND windowhandle){ HANDLE hThread; unsigned threadID; ReadParams readstr; DWORD ret, ret2; readstr.param2 = porthandle; hThread = (HANDLE)_beginthreadex( NULL, 0, ReadPort, &readstr, 0, &threadID ); ret = WaitForSingleObject(hThread, 500); if (ret == WAIT_OBJECT_0) { CloseHandle(hThread); if (readstr.param1 != NULL) // Send message to GUI return 0; } else if (ret == WAIT_TIMEOUT) { ret2 = CloseHandle(hThread); return -1; } else { ret2 = CloseHandle(hThread); if (ret2 == 0) return -2; }} Thank you in advance, Sna.

    Read the article

  • Exception from Response.Redirect?

    - by allencoded
    I keep getting an error: A first chance exception of type 'System.Threading.ThreadAbortException' occurred in mscorlib.dll An exception of type 'System.Threading.ThreadAbortException' occurred in mscorlib.dll but was not handled in user code The thread '' (0x27ee4) has exited with code 0 (0x0). I was told it was related to this: protected void Button1_Click(object sender, EventArgs e) { Response.Redirect("Results.aspx?Keywords=" + searchString.Text); } I figured it may help to include my complete code. The code above is the only C# code on my first ASP page, and it relates to the code below, which is the only C# code I have on my second page. I am simply trying to pass a keyword from a search form to this block of code: if (Request.QueryString["Keywords"] != null){ string keywords = Request.QueryString["Keywords"]; string myAppID = "HIDDEN"; var xml = XDocument.Load("http://svcs.ebay.com/services/search/FindingService/v1?OPERATION-NAME=findItemsByKeywords&SERVICE-VERSION=1.0.0&SECURITY-APPNAME=" + myAppID + "&RESPONSE-DATA-FORMAT=XML&REST-PAYLOAD&keywords=" + keywords + "&paginationInput.entriesPerPage=5"); XNamespace ns = "http://www.ebay.com/marketplace/search/v1/services"; var titles = from item in xml.Root.Descendants(ns + "title") select new{ title = xml.Descendants(ns + "title").Select (x => x.Value), }; foreach (var item in titles){ Label1.Text += item; } } This block of code takes the keyword value and uses it in an API call to perform a search. The XML returned by the API is formatted like this: <findItemsByKeywordsResponse xmlns="http://www.ebay.com/marketplace/search/v1/services"> <searchReslut count="5"> <item> <title></title> </item> <item> <title></title> </item> <item> <title></title> </item> Why am I getting this error, and how do I fix it?
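    One commonly suggested way to avoid this (a sketch, not from the original question; the page class name is made up and searchString is assumed to be the TextBox from the first page) is to use the Response.Redirect overload that does not end the response. Response.Redirect(url) ends the response by throwing ThreadAbortException internally, which is what the debugger is reporting; passing false and then completing the request sidesteps it:

        using System;
        using System.Web.UI;

        public partial class SearchPage : Page   // hypothetical page class name
        {
            protected void Button1_Click(object sender, EventArgs e)
            {
                // Redirect without calling Response.End(); the (url, true) overload
                // ends the response by throwing ThreadAbortException by design.
                // searchString is the search TextBox declared in the page markup.
                Response.Redirect("Results.aspx?Keywords=" + Server.UrlEncode(searchString.Text), false);

                // Skip the rest of the ASP.NET pipeline for this request so nothing
                // else runs after the redirect headers have been set.
                Context.ApplicationInstance.CompleteRequest();
            }
        }

    The first-chance message on its own is not an application bug; it only matters if code placed after the redirect call relies on never being reached.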

    Read the article

  • GPO errors filling up event viewer

    - by burntehsky
    there have been a few issues with the server i have been working on i check the event viewer and it is filled with the errors below i was not sure how to go about fixing this i looked in the path where the file is and it is there Windows cannot access the file gpt.ini for GPO CN={31B2F340-016D-11D2-945F-00C04FB984F9},CN=Policies,CN=System,DC=ISPHOME,DC=NET. The file must be present at the location <\\isphome.net\\sysvol\ISPHOME.NET\Policies\{31B2F340-016D-11D2-945F-00C04FB984F9}\gpt.ini>. (The network location cannot be reached. For information about network troubleshooting, see Windows Help. ). Group Policy processing aborted. C:\Documents and Settings\Dimitri>ipconfig /all Windows IP Configuration Host Name . . . . . . . . . . . . : ispserver Primary Dns Suffix . . . . . . . : ISPHOME.NET Node Type . . . . . . . . . . . . : Unknown IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : ISPHOME.NET Ethernet adapter Local Area Connection 3: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Intel(R) PRO/100 VE Network Connection #2 Physical Address. . . . . . . . . : 00-07-E9-AA-3E-C3 DHCP Enabled. . . . . . . . . . . : No IP Address. . . . . . . . . . . . : 192.168.1.50 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168.1.1 DNS Servers . . . . . . . . . . . : 127.0.0.1 *dcdiag /c /v is below* Summary of test results for DNS servers used by the above domain contro llers: DNS server: 192.168.1.1 (<name unavailable>) All tests passed on this DNS server This is a valid DNS server DNS server: 192.168.1.50 (<name unavailable>) All tests passed on this DNS server This is a valid DNS server Name resolution is funtional. _ldap._tcp SRV record for the fores t root domain is registered Summary of DNS test results: Auth Basc Forw Del Dyn RReg Ext ________________________________________________________________ Domain: ISPHOME.NET ispserver PASS FAIL PASS PASS PASS PASS n/a ......................... ISPHOME.NET failed test DNS

    Read the article

  • Accessing a broken mdadm raid

    - by CarstenCarsten
    Hi! I used a western digital mybookworld (SOHO NAS storage using Linux) as backup for my Linux box. Suddenly, the mybookworld does not boot up any more. So I opened the box, removed the hard disk and put the hard disk into an external USB HDD case, and connected it to my Linux box. [ 530.640301] usb 2-1: new high speed USB device using ehci_hcd and address 3 [ 530.797630] scsi7 : usb-storage 2-1:1.0 [ 531.794844] scsi 7:0:0:0: Direct-Access WDC WD75 00AAKS-00RBA0 PQ: 0 ANSI: 2 [ 531.796490] sd 7:0:0:0: Attached scsi generic sg3 type 0 [ 531.797966] sd 7:0:0:0: [sdc] 1465149168 512-byte logical blocks: (750 GB/698 GiB) [ 531.800317] sd 7:0:0:0: [sdc] Write Protect is off [ 531.800327] sd 7:0:0:0: [sdc] Mode Sense: 38 00 00 00 [ 531.800333] sd 7:0:0:0: [sdc] Assuming drive cache: write through [ 531.803821] sd 7:0:0:0: [sdc] Assuming drive cache: write through [ 531.803836] sdc: sdc1 sdc2 sdc3 sdc4 [ 531.815831] sd 7:0:0:0: [sdc] Assuming drive cache: write through [ 531.815842] sd 7:0:0:0: [sdc] Attached SCSI disk The dmesg output looks normal, but I was wondering why the hardisk was not mounted at all. And why there are 4 different partitions on it. fdisk showed the following: root@ubuntu:/home/ubuntu# fdisk /dev/sdc WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u'). Command (m for help): p Disk /dev/sdc: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00007c00 Device Boot Start End Blocks Id System /dev/sdc1 4 369 2939895 fd Linux raid autodetect /dev/sdc2 370 382 104422+ fd Linux raid autodetect /dev/sdc3 383 505 987997+ fd Linux raid autodetect /dev/sdc4 506 91201 728515620 fd Linux raid autodetect Oh no! Everything seems to be created as a mdadm software raid. Calling mdadm --examine with the different partitions seems to affirm that. I think the only partition I am interested in, is /dev/sdc4 (because it is the largest). But nevertheless I called mdadm --examine with every partition. 
root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc1 /dev/sdc1: Magic : a92b4efc Version : 00.90.00 UUID : 5626a2d8:070ad992:ef1c8d24:cd8e13e4 Creation Time : Wed Feb 20 00:57:49 2002 Raid Level : raid1 Used Dev Size : 2939776 (2.80 GiB 3.01 GB) Array Size : 2939776 (2.80 GiB 3.01 GB) Raid Devices : 2 Total Devices : 1 Preferred Minor : 1 Update Time : Sun Nov 21 11:05:27 2010 State : clean Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Checksum : 4c90bc55 - correct Events : 16682 Number Major Minor RaidDevice State this 0 8 1 0 active sync /dev/sda1 0 0 8 1 0 active sync /dev/sda1 1 1 0 0 1 faulty removed root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc2 /dev/sdc2: Magic : a92b4efc Version : 00.90.00 UUID : 9734b3ee:2d5af206:05fe3413:585f7f26 Creation Time : Wed Feb 20 00:57:54 2002 Raid Level : raid1 Used Dev Size : 104320 (101.89 MiB 106.82 MB) Array Size : 104320 (101.89 MiB 106.82 MB) Raid Devices : 2 Total Devices : 1 Preferred Minor : 2 Update Time : Wed Oct 27 20:19:08 2010 State : clean Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Checksum : 55560b40 - correct Events : 9884 Number Major Minor RaidDevice State this 0 8 2 0 active sync /dev/sda2 0 0 8 2 0 active sync /dev/sda2 1 1 0 0 1 faulty removed root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc3 /dev/sdc3: Magic : a92b4efc Version : 00.90.00 UUID : 08f30b4f:91cca15d:2332bfef:48e67824 Creation Time : Wed Feb 20 00:57:54 2002 Raid Level : raid1 Used Dev Size : 987904 (964.91 MiB 1011.61 MB) Array Size : 987904 (964.91 MiB 1011.61 MB) Raid Devices : 2 Total Devices : 1 Preferred Minor : 3 Update Time : Sun Nov 21 11:05:27 2010 State : clean Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Checksum : 39717874 - correct Events : 73678 Number Major Minor RaidDevice State this 0 8 3 0 active sync 0 0 8 3 0 active sync 1 1 0 0 1 faulty removed root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc4 /dev/sdc4: Magic : a92b4efc Version : 00.90.00 UUID : febb75ca:e9d1ce18:f14cc006:f759419a Creation Time : Wed Feb 20 00:57:55 2002 Raid Level : raid1 Used Dev Size : 728515520 (694.77 GiB 746.00 GB) Array Size : 728515520 (694.77 GiB 746.00 GB) Raid Devices : 2 Total Devices : 1 Preferred Minor : 4 Update Time : Sun Nov 21 11:05:27 2010 State : clean Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Checksum : 2f36a392 - correct Events : 519320 Number Major Minor RaidDevice State this 0 8 4 0 active sync 0 0 8 4 0 active sync 1 1 0 0 1 faulty removed If I read the output correctly everything was removed, because it was faulty. Is there ANY way to see the contents of the largest partition? Or seeing somehow which files are broken? I see that everything is raid1 which is only mirroring, so this should be a normal partition. I am anxious to do anything with mdadm, in fear that I destroy the data on the hard disk. I would be very thankful for any help.

    Read the article

  • count specific things within a code in c++

    - by shap
    can anyone help me make this more generalised and more pro? #include <fstream> #include <iostream> #include <string> #include <vector> using namespace std; int main() { // open text file for input: string file_name; cout << "please enter file name: "; cin >> file_name; // associate the input file stream with a text file ifstream infile(file_name.c_str()); // error checking for a valid filename if ( !infile ) { cerr << "Unable to open file " << file_name << " -- quitting!\n"; return( -1 ); } else cout << "\n"; // some data structures to perform the function vector<string> lines_of_text; string textline; // read in text file, line by while (getline( infile, textline, '\n' )) { // add the new element to the vector lines_of_text.push_back( textline ); // print the 'back' vector element - see the STL documentation cout << lines_of_text.back() << "\n"; } cout<<"OUTPUT BEGINS HERE: "<<endl<<endl; cout<<"the total capacity of vector: lines_of_text is: "<<lines_of_text.capacity()<<endl; int PLOC = (lines_of_text.size()+1); int numbComments =0; int numbClasses =0; cout<<"\nThe total number of physical lines of code is: "<<PLOC<<endl; for (int i=0; i<(PLOC-1); i++) //reads through each part of the vector string line-by-line and triggers if the //it registers the "//" which will output a number lower than 100 (since no line is 100 char long and if the function does not //register that character within the string, it outputs a public status constant that is found in the class string and has a huge value //alot more than 100. { string temp(lines_of_text [i]); if (temp.find("//")<100) numbComments +=1; } cout<<"The total number of comment lines is: "<<numbComments<<endl; for (int j=0; j<(PLOC-1); j++) { string temp(lines_of_text [j]); if (temp.find("};")<100) numbClasses +=1; } cout<<"The total number of classes is: "<<numbClasses<<endl;

    Read the article

  • Type Mismatch using VBScript to create Pivot Table/Chart

    - by Rodricks
    I get Run time error:Type mismatch for the following code: Dim Field Field="Gen8" '''' ============================================================================== EXCEL Sheet '==============Errors -Stacked Chart by Year and Week --ALL WEEKS ''''=================================================== objExcel.ActiveWorkbook.Worksheets.Add SheetNumber = SheetNumber ' add adds in front so sheetnumber stays 1 objExcel.Sheets(SheetNumber).Select objExcel.Sheets(SheetNumber).Activate objExcel.Sheets(SheetNumber).Name = "YRWk" SheetName = "SYS_Product_YRWeeks" '============== strSQLCustomers = "select isnull(AB.Week,D.Week_Num) AS YRWk,ISNULL(AB.UnCorrectable,0) as UE," & _ "isnull(AB.Correctable,0) as CE, isnull(AB.SYS_Product,'" & Field & "'" & _ ") as SYS_Product from AHS_Dates D Left Join (select * from P_tot where " & _ "SYS_Product = '" & Field & "'" & _ " ) AB on AB.Year_=D.Year_ and AB.Week=D.Week_Num order by YRWk" FetchData2.Open strSQLCustomers, openConnection, adOpenStatic, adLockReadOnly If FetchData2.RecordCount > 0 Then **objExcel.ActiveWorkbook.Connections.Add SheetName, "", _ Array(Array( _ "ODBC;DRIVER=SQL Server Native Client 10.0;SERVER=" & sServerIP & ";TimeOut=5000000; Trusted_Connection=Yes;Integrated Security=SSPI;" _ ), Array("DATABASE=" & sDataBaseName & ";")), Array(strSQLCustomers), 2** objExcel.ActiveWorkbook.PivotCaches.Create(SourceType:=xlExternal, SourceData:= _ objExcel.ActiveWorkbook.Connections(SheetName), Version:= _ xlPivotTableVersion14).CreatePivotTable TableDestination:=objExcel.Sheets(SheetNumber).Name & "!R3C7", _ TableName:="PivotTable" & SheetNumber, DefaultVersion:=xlPivotTableVersion14 Set ws = objExcel.ActiveWorkbook.Worksheets(objExcel.Sheets(SheetNumber).Name) objExcel.Cells(3, 7).Select ws.Shapes.AddChart.Select objExcel.ActiveWorkbook.ActiveChart.ChartType = xlAreaStacked objExcel.ActiveWorkbook.ActiveChart.SetSourceData Source:=ws.Range(objExcel.Sheets(SheetNumber).Name & "!$G$3:$I$20") With ws.PivotTables("PivotTable1").PivotFields("SYS_PRoduct") .Orientation = xlColumnField .Position = 1 End With With ws.PivotTables("PivotTable1").PivotFields("YRWk") .Orientation = xlRowField .Position = 1 End With ' With ws.PivotTables("PivotTable1").PivotFields("Year_") ' .Orientation = xlRowField ' .Position = 2 ' End With objExcel.ActiveWorkbook.ActiveChart.ChartTitle.Text = " Errors by Week and Year -ALLWEEKS" ws.PivotTables("PivotTable1").AddDataField ws.PivotTables( _ "PivotTable1").PivotFields("UE"), "Sum of UnCorrectable", xlSum ws.PivotTables("PivotTable1").AddDataField ws.PivotTables( _ "PivotTable1").PivotFields("CE"), "Sum of Correctable", xlSum End If ''MsgBox (FetchData2.RecordCount) FetchData2.Close I have used the same pivot chart + table in other slides. The problem I think is the query length My question: 1.Is there a better way for me to access the query results. Would appreciate the steps if any. 2.If I can make it a procedure how do I modify the pivot chart/table creation. Thanks. The query results with all 52 weeks: Week UE CE SYS_Product(or Field) 1 0 0 Gen8 2 0 0 Gen8 3 0 0 Gen8 4 0 0 Gen8 5 0 0 Gen8 6 0 0 Gen8

    Read the article

  • Database advantages? Access, MySQL, msSQL, or any others?

    - by JimZ
    Dear all Stackoverflowers, I just started to learn programming and now I'm putting this question online based on a quote: no question is silly. My work needs me to develop a web-based order system, which needs a database. Since I have used Excel for years as a general office user, I naturally turned to Access. However, most people say Access is very limited compared to MySQL or MSSQL, or any other more professional database system. But after developing some functions for my company's order system, I really find that Access can fulfil my requirements. I also tried MSSQL for development, which I did not find very convenient to use. I have searched on Stack Overflow and found no general answer to my doubt. Now I am sincerely hoping some experienced and professional developers could clear up my doubts. I'm listing some Access advantages below, which I don't think other database systems have; I hope you could help me find these advantages in the others too. 1. Access is portable; I can just copy a xxx.accdb file to my company and continue with development. 2. Access makes it easy to generate helpful tables; for example, it will automatically generate a field that counts up automatically, which can be used as a primary key value. 3. It is more compatible with Excel for displaying and filtering data. 4. Importantly, it needs almost no environment setup; it just needs MS Office to be installed. ............others However, I also find some points where MSSQL has the advantage: 1. security reasons 2. easy to back up (just use the BACKUP..... SQL statement to do it) 3. you can write stored procedures to keep some functions in the database ...............others Specifically, I wish some friends could tell me how to make the other databases portable, since I usually work both at home and in the office. It's a headache to move MSSQL work to my office, since the versions of MSSQL are not the same. Thank you all and best regards, :)
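    On the portability point, one middle-ground approach (a sketch only; the table, config entry and class names are invented, and it assumes a .NET front end) is to write the data-access code against ADO.NET's provider-neutral base classes, so switching between an Access .accdb file at home and a SQL Server database at the office becomes a configuration change rather than a code change:

        using System;
        using System.Configuration;
        using System.Data.Common;

        static class Db
        {
            public static int CountOrders()
            {
                // Hypothetical config entries, swapped per environment, e.g.:
                //   providerName="System.Data.OleDb"     connectionString="Provider=Microsoft.ACE.OLEDB.12.0;Data Source=Orders.accdb"
                //   providerName="System.Data.SqlClient" connectionString="Server=.;Database=Orders;Integrated Security=True"
                ConnectionStringSettings settings = ConfigurationManager.ConnectionStrings["OrderDb"];
                DbProviderFactory factory = DbProviderFactories.GetFactory(settings.ProviderName);

                using (DbConnection conn = factory.CreateConnection())
                {
                    conn.ConnectionString = settings.ConnectionString;
                    conn.Open();

                    using (DbCommand cmd = conn.CreateCommand())
                    {
                        // Keep the SQL to the common subset both engines accept.
                        cmd.CommandText = "SELECT COUNT(*) FROM Orders";
                        return Convert.ToInt32(cmd.ExecuteScalar());
                    }
                }
            }
        }

    For moving an actual MSSQL database between home and office, backup/restore (or detach/attach of the data file) is the usual route, with the caveat that a backup only restores onto the same or a newer SQL Server version.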

    Read the article

  • Passing values between pages in JavaScript

    - by buni
    using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.Data.SqlClient; using System.Configuration; using System.Text; using System.Web.Services; using System.IO; namespace T_Smade { public partial class ConferenceManagement : System.Web.UI.Page { volatile int i = 0; protected void Page_Load(object sender, EventArgs e) { GetSessionList(); } public void GetSessionList() { string secondResult = ""; string userName = ""; try { if (HttpContext.Current.User.Identity.IsAuthenticated) { userName = HttpContext.Current.User.Identity.Name; } SqlConnection thisConnection = new SqlConnection(@"data Source=ZOLA-PC;AttachDbFilename=D:\2\5.Devp\my DB\ASPNETDB.MDF;Integrated Security=True"); thisConnection.Open(); SqlCommand secondCommand = thisConnection.CreateCommand(); secondCommand.CommandText = "SELECT myApp_Session.session_id FROM myApp_Session, myApp_Role_in_Session where myApp_Role_in_Session.user_name='" + userName + "' and myApp_Role_in_Session.session_id=myApp_Session.session_id"; SqlDataReader secondReader = secondCommand.ExecuteReader(); while (secondReader.Read()) { secondResult = secondResult + secondReader["session_id"].ToString() + ";"; } secondReader.Close(); SqlCommand thisCommand = thisConnection.CreateCommand(); thisCommand.CommandText = "SELECT * FROM myApp_Session;"; SqlDataReader thisReader = thisCommand.ExecuteReader(); while (thisReader.Read()) { test.Controls.Add(GetLabel(thisReader["session_id"].ToString(), thisReader["session_name"].ToString())); string[] compare = secondResult.Split(';'); foreach (string word in compare) { if (word == thisReader["session_id"].ToString()) { test.Controls.Add(GetButton(thisReader["session_name"].ToString(), "Join Session")); } } } thisReader.Close(); thisConnection.Close(); } catch (SqlException ex) { } } private Button GetButton(string id, string name) { Button b = new Button(); b.Text = name; b.ID = "Button_" + id + i; b.Command += new CommandEventHandler(Button_Click); b.CommandArgument = id; i++; return b; } private Label GetLabel(string id, string name) { Label tb = new Label(); tb.Text = name; tb.ID = id; return tb; } protected void Button_Click(object sender, CommandEventArgs e) { Response.Redirect("EnterSession.aspx?session=" + e.CommandArgument.ToString()); } } I have this code when a user clicks a button www.mypage/EnterSession.aspx?session=session_name in the EnterSession.aspx i have used the code below to track the current URL _gaq.push(['pageTrackerTime._trackEvent', 'Category', 'Action', document.location.href, roundleaveSiteEnd]); Now I would also like to track in the Action parameter the session_name from the previous page.see the code below from the previous page test.Controls.Add(GetButton(thisReader["session_name"].ToString(), "Join Session")); Some idea how to do it? Thanx
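    One way to do this (a sketch; the property name and class layout are assumptions, only Request.QueryString["session"] and the _trackEvent call come from the code above) is to read the session name out of the query string in the EnterSession.aspx code-behind, expose it to the markup, and emit it as the Action value of the tracking call:

        // EnterSession.aspx.cs
        using System;
        using System.Web;
        using System.Web.UI;

        namespace T_Smade
        {
            public partial class EnterSession : Page
            {
                // Exposed to the .aspx markup below.
                protected string SessionNameForAnalytics { get; private set; }

                protected void Page_Load(object sender, EventArgs e)
                {
                    // The session name that ConferenceManagement put in the URL.
                    string sessionName = Request.QueryString["session"] ?? string.Empty;

                    // Encode it so it is safe inside a JavaScript string literal.
                    SessionNameForAnalytics = HttpUtility.JavaScriptStringEncode(sessionName);
                }
            }
        }

        // In EnterSession.aspx, the tracking call can then use it as the Action:
        //   _gaq.push(['pageTrackerTime._trackEvent', 'Category', '<%= SessionNameForAnalytics %>', document.location.href, roundleaveSiteEnd]);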

    Read the article

  • Solution: Testing Web Services with MSTest on Team Build

    - by Martin Hinshelwood
    Guess what. About 20 minutes after I fixed the build, Allan broke it again! Update: 4th March 2010 – After having huge problems getting this working I read Billy Wang’s post which showed me the light. The problem here is that even though the test passes locally it will not during an Automated Build. When you send your tests to the build server it does not understand that you want to spin up the web site and run tests against that! When you run the test in Visual Studio it spins up the web site anyway, but would you expect your test to pass if you told the website not to spin up? Of course not. So, when you send the code to the build server you need to tell it what to spin up. First, the best way to get the parameters you need is to right click on the method you want to test and select “Create Unit Test”. This will detect wither you are running in IIS or ASP.NET Development Server or None, and create the relevant tags. Figure: Right clicking on “SaveDefaultProjectFile” will produce a context menu with “Create Unit tests…” on it. If you use this option it will AutoDetect most of the Attributes that are required. /// <summary> ///A test for SSW.SQLDeploy.SilverlightUI.Web.Services.IProfileService.SaveDefaultProjectFile ///</summary> // TODO: Ensure that the UrlToTest attribute specifies a URL to an ASP.NET page (for example, // http://.../Default.aspx). This is necessary for the unit test to be executed on the web server, // whether you are testing a page, web service, or a WCF service. [TestMethod()] [HostType("ASP.NET")] [AspNetDevelopmentServerHost("D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web", "/")] [UrlToTest("http://localhost:3100/")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] public void SaveDefaultProjectFileTest() { IProfileService target = new ProfileService(); // TODO: Initialize to an appropriate value string strComputerName = string.Empty; // TODO: Initialize to an appropriate value bool expected = false; // TODO: Initialize to an appropriate value bool actual; actual = target.SaveDefaultProjectFile(strComputerName); Assert.AreEqual(expected, actual); Assert.Inconclusive("Verify the correctness of this test method."); } Figure: Auto created code that shows the attributes required to run correctly in IIS or in this case ASP.NET Development Server If you are a purist and don’t like creating unit tests like this then you just need to add the three attributes manually. HostType – This attribute specified what host to use. Its an extensibility point, so you could write your own. Or you could just use “ASP.NET”. UrlToTest – This specifies the start URL. For most tests it does not matter which page you call, as long as it is a valid page otherwise your test may not run on the server, but may pass anyway. AspNetDevelopmentServerHost – This is a nasty one, it is only used if you are using ASP.NET Development Host and is unnecessary if you are using IIS. This sets the host settings and the first value MUST be the physical path to the root of your web application. OK, so all that was rubbish and I could not get anything working using the MSDN documentation. Google provided very little help until I ran into Billy Wang’s post  and I heard that heavenly music that all developers hear when understanding dawns that what they have been doing up until now is just plain stupid. I am sure that the above will work when I am doing Web Unit Tests, but there is a much easier way when doing web services. 
You need to add the AspNetDevelopmentServer attribute to your code. This will tell MSTest to spin up an ASP.NET Development server to host the service. Specify the path to the web application you want to use. [AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] [TestMethod] public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True() { ProfileServiceClient target = new ProfileServiceClient(); bool isTrue = target.SaveDefaultProjectFile("Mav"); Assert.AreEqual(true, isTrue); } Figure: This AspNetDevelopmentServer will make sure that the specified web application is launched. Now we can run the test and have it pass, but if the dynamically assigned ASP.NET Development server port changes what happens to the details in your app.config that was generated when creating a reference to the web service? Well, it would be wrong and the test would fail. This is where Billy’s helper method comes in. Once you have created an instance of your service call, and it has loaded the config, but before you make any calls to it you need to go in and dynamically set the Endpoint address to the same address as your dynamically hosted Web Application. using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.VisualStudio.TestTools.UnitTesting; using System.Reflection; using System.ServiceModel.Description; using System.ServiceModel; namespace SSW.SQLDeploy.Test { class WcfWebServiceHelper { public static bool TryUrlRedirection(object client, TestContext context, string identifier) { bool result = true; try { PropertyInfo property = client.GetType().GetProperty("Endpoint"); string webServer = context.Properties[string.Format("AspNetDevelopmentServer.{0}", identifier)].ToString(); Uri webServerUri = new Uri(webServer); ServiceEndpoint endpoint = (ServiceEndpoint)property.GetValue(client, null); EndpointAddressBuilder builder = new EndpointAddressBuilder(endpoint.Address); builder.Uri = new Uri(endpoint.Address.Uri.OriginalString.Replace(endpoint.Address.Uri.Authority, webServerUri.Authority)); endpoint.Address = builder.ToEndpointAddress(); } catch (Exception e) { context.WriteLine(e.Message); result = false; } return result; } } } Figure: This fixes a problem with the URL in your web.config not being the same as the dynamically hosted ASP.NET Development server port. We can now add a call to this method after we created the Proxy object and change the Endpoint for the Service to the correct one. This process is wrapped in an assert as if it fails there is no point in continuing. [AspNetDevelopmentServer("WebApp1", D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] [TestMethod] public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True() { ProfileServiceClient target = new ProfileServiceClient(); Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1")); bool isTrue = target.SaveDefaultProjectFile("Mav"); Assert.AreEqual(true, isTrue); } Figure: Editing the Endpoint from the app.config on the fly to match the dynamically hosted ASP.NET Development Server URL and port is now easy. As you can imagine AspNetDevelopmentServer poses some problems of you have multiple developers. What are the chances of everyone using the same location to store the source? 
What about if you are using a build server, how do you tell MSTest where to look for the files? To the rescue is a property called" “%PathToWebRoot%” which is always right on the build server. It will always point to your build drop folder for your solutions web sites. Which will be “\\tfs.ssw.com.au\BuildDrop\[BuildName]\Debug\_PrecompiledWeb\” or whatever your build drop location is. So lets change the code above to add this. [AspNetDevelopmentServer("WebApp1", "%PathToWebRoot%\\SSW.SQLDeploy.SilverlightUI.Web")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] [TestMethod] public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True() { ProfileServiceClient target = new ProfileServiceClient(); Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1")); bool isTrue = target.SaveDefaultProjectFile("Mav"); Assert.AreEqual(true, isTrue); } Figure: Adding %PathToWebRoot% to the AspNetDevelopmentServer path makes it work everywhere. Now we have another problem… this will ONLY run on the build server and will fail locally as %PathToWebRoot%’s default value is “C:\Users\[profile]\Documents\Visual Studio 2010\Projects”. Well this sucks… How do we get the test to run on any build server and any developer laptop. Open “Tools | Options | Test Tools | Test Execution” in Visual Studio and you will see a field called “Web application root directory”. This is where you override that default above. Figure: You can override the default website location for tests. In my case I would put in “D:\Workspaces\SSW\SSW\SqlDeploy\DEV\Main” and all the developers working with this branch would put in the folder that they have mapped. Can you see a problem? What is I create a “$/SSW/SqlDeploy/DEV/34567” branch from Main and I want to run tests in there. Well… I would have to change the value above. This is not ideal, but as you can put your projects anywhere on a computer, it has to be done. Conclusion Although this looks convoluted and complicated there are real problems being solved here that mean that you have a test ANYWHERE solution. Any build server, any Developer workstation. Resources: http://billwg.blogspot.com/2009/06/testing-wcf-web-services.html http://tough-to-find.blogspot.com/2008/04/testing-asmx-web-services-in-visual.html http://msdn.microsoft.com/en-us/library/ms243399(VS.100).aspx http://blogs.msdn.com/dscruggs/archive/2008/09/29/web-tests-unit-tests-the-asp-net-development-server-and-code-coverage.aspx http://www.5z5.com/News/?543f8bc8b36b174f Technorati Tags: VS2010,MSTest,Team Build 2010,Team Build,Visual Studio,Visual Studio 2010,Visual Studio ALM,Team Test,Team Test 2010

    Read the article

  • Letter to Ballmer: Making Better Consumer Devices

    - by andrewbrust
    Last year, I wrote Steve Ballmer an email, and he was kind enough to write me back.  The email contained a scan of a column I wrote praising Microsoft’s BI strategy.  His reply contained three simple words: “Super nice  thanks.” Well, now I’d like to write to Steve again, in an open letter format, and this time the love may be a bit tougher.  But I’m still super earnest. The past two days have been eventful ones for Microsoft: The company announced the departure of company veterans Robbie Bach and J Allard and the market announced Apple is now besting Microsoft in market capitalization. Plus, announcements were made that make it plain that Ballmer will, in effect, be running Microsoft’s Entertainment & Devices division himself. With that in mind, I’d like to offer my list of a dozen things I think Microsoft’s CEO should do to improve that division’s offerings and, hopefully, its bottom line. So here goes:   1. On Windows Phone 7, Stay the Course The press is teeming with headlines and reader comments proclaiming the death-before-arrival of Windows Phone 7.  That’s plain silly.  You’ve got the makings of a great and unique SmartPhone platform, and you’re the only company (even considering RIM) that can offer full fidelity Exchange integration, not to mention implementing Office on the device.  Let the existing team finish this puppy and ship it. And then have them pump out a few updates, over-the-air, quickly.  Show them that Google Android’s not the only product that can do good, rapid dot releases. And another thing: make sure your OEMs’ devices have flawless touch screens.  If they don’t, then you shouldn’t certify them for delivery to customers.  Period. Oh, and kill the Kin, quietly.  It was DOA, and you know it.   2. Move Media Center to the Xbox Platform Media Center is, at its core, a good product.  But delivering a media distribution and DVR platform on a sophisticated PC operating system like Windows 7 just creates too many moving parts.  Xbox already functions as the best Media Center extender device – it should actually be the hub as well. Media Center is mostly based on .NET code – and XNA is a .NET environment for Xbox – find a way to bridge that small gap and make Media Center a joy to work with instead of a frustration.  Beating Apple TV out of this sub-market is the lowest hanging fruit on the tree (goofy pun, but it’s true).   3. Integrate Media Center with Mediaroom, or Kill the Latter You have two media products with almost identical names.  One is for standalone DVRs and the other is for IPTV cable set tops with DVR capabilities.  Can we merge these please?  My previous request of putting Media Center on Xbox would seem to tie into this nicely, since you’ve announced plans to do that with Mediaroom already.   4. Fix the Red Ring of Death People love the Xbox, but they really don’t love sending their consoles back every 18-24 months, when they get a bunch of red lights flashing on power up.  You’ve handled this defect about as gracefully as possible, but it’s been around for a long time now and it doesn’t seem to be fixed yet.  You can do better.  In fact, you must do better, or you insult your customers.   5. Add Blu Ray to Xbox I know, streaming movies are the future; physical media is legacy technology.  So if that’s true, why did you back HD DVD so hard?  You know why: for now, the film studios won’t allow a large selection of new release, HD, surround sound content be distributed on any medium other than Blu Ray or cable pay per view/on-demand.  
Don’t you want home theater buffs to see the Xbox as a fantastic device for their rigs?  Don’t you want to put PlayStation 3 out of its misery?  And if you follow my suggestions above (move Media Center to the Xbox and fix the Red Ring problem), you’d have it all sewn up.  Do I think Blu Ray functionality will move a lot of units?  No.  Do I think that it would move more units with desperately needed influential home theater consumers?  You bet.  And you might sell more ZunePass subscriptions in the process. But while you’re at it, make the fan quieter, please.   6. Make More of Windows Home Server Home Server is a fantastic product.  And for reasons unknown to me, it seems like you’re letting it languish.  Development of the add-in ecosystem seems underfunded.  WHS’ unparalleled ease of use and reliability for home PC backup (and emergency restores) goes unsung.  Product cycles are slow.  Support for your OEMs, who are doing great work, especially in the green space with Atom CPUs, seems lacking.  You’ve married a trophy girl and you keep her cloistered at home!  That’s cruel, unusual and, um, incredibly ill-advised.  Make use of this ace card, and while you’re at it, give it real integration with Media Center.  The integration thus far proof-of-concept quality.  You should go way past that – both products will benefit immeasurably.   7. Set Up a Partner Platform for Custom Installers There’s a whole sub-industry of companies that install, integrate and configure home theater, security and connected home products.  They have an industry group. They are influential in the high-end of the consumer electronics industry, and so are their customers.  They love Media Center and they love Windows Home Server.  But I have talked to several of them at the Consumer Electronics Show and they tell me you don’t love them.  They find it very difficult to do business with Microsoft, even though they want nothing more than to sell and evangelize your platform.  This is a travesty.  Please fix it.  Get Allison Watson and the Microsoft Partner Network on board and have her hire someone who knows how to run a channel program for consumer electronics companies.  Problem solved.  Markets expanded.   8. Make Your Own Hardware In other areas, I know you love your partners.  I help run one, so I appreciate that.  But when it came to Xbox and Zune you built them it yourself (albeit on a contract basis, which is fine).  Windows Phone 7 has a chance to work as an OEM play, but it would work better if you produced the devices.  At least consider building a reference device that sells alongside your OEMs’ offerings.  That’s what Google did with the Nexxus One.  And while that phone was not itself a big seller, it catalyzed two wonderful things : (1) a quality bar was set and (2) partners exceeded it.  Before the Nexxus One, the best Android handset out there was the Motorola Droid. The Nexxus One was better, and the HTC Droid Incredible and Evo 4G are now even better than Google’s phone, which is why Verizon and Sprint decided not to carry it.  Imagine if all Windows Phone 6.x devices were on par with the HTC HD2.  I tend to believe you’d have a lot bigger market share than you do now.   9. Continue with Your Retail Initiative From what I hear, it sounds like it’s going well.  And this goes right along with making your own hardware.  When you build it, they will come.  And then it makes the likes of Best Buy and Staples do better.   10. Make an Acquisition (or Two) TiVo and/or Moxi look ripe for the picking.  
With their ability to build stuff people love and your ability to run a business, you might just have something.  But do a better job than you did when you bought Danger.  Buy the ideas, not just the customers, eh?   11. Make Beautiful Stuff You’ve heard this one before, I know.  But I have some head-shrinking advice on this one.  You know that Apple obsesses over its industrial design.  You know that appeals to consumers.  But it seems you think doing so is Apple’s game exclusively and so you shouldn’t even try.  Bull dinky.  Come to New York and visit the Museum of Modern Art’s Architecture and Design gallery.  You’ll see that lots of companies and product categories have had very high design value well before Apple existed.  You can do this, and the Zune HD was a great start.  Now run with that.  Find those negative voices in your head that are telling you that you can’t and shut them up.  For good.   12. Burst the Bubble Some of the products you’ve built seem like they were conceived in a bizarro world.  That would appear to be the result of groupthink.  You must do better.  And there’s lots of people willing to advise you.  This includes just about everyone in the Regional Director program, and probably a bunch of MVPs.  Heck, I bet the guys at Engadget could help out too.  Imagine if you let them see the Kin before it shipped.  Talk to high-end gear consumers.  Talk to Best Buy and CostCo customers too.   Signing Off I hope this was of value to you.  As I wrote this I kept telling myself how obvious, even trite, some of these pieces of advice were and then, because of that, doubting they’d really help.  But I decided that they must not be obvious to Microsoft.  Sometimes when you get wrapped up in stuff, it’s hard to clear your head.  I think my head’s pretty clear here though (I’m wrapped up in other stuff), so maybe my perspective can help.  If not, well, then, I guess they all can’t be super nice.

    Read the article

  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrated on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a ‘global’ application plan. Recently I was asked to write a blog post about the wait statistics in SQL Server and since I had been thinking about writing it for quite some time now, here it is. It is a wide-spread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always a part of a bigger system – there are always other players in the game: whether it is a client application, web service, any other kind of data import/export process and so on. In short, the SQL Server surroundings look like this: This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about the TDS (Tabular data stream). As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance. Let’s dive into an example: let’s say that we have a web server, hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10Mbs. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: “My data is coming very slow.” Now, let’s move on to a bit more exciting example: imagine that there is a similar setup as the example above – one web server and one database server, and the application is not using any stored procedure calls, but instead for every user request the application is sending 80kb query over the network to the SQL Server. (I really thought this does not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let’s say that the 80kb query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute in peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a of number network packets (according to the packet size settings) – and will travel over the network. 
On the other side, our SQL Server network card will receive the packets, will pass them to our network layer, the packets will get assembled, and eventually SQL Server will start processing the query – parsing, algebrizing, generating the query execution plan and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our Database Engine. There will certainly be some processing overhead – until the database engine deals with the 80kb query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled. Let’s say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will have to be converted to packets (depending on the network packet size settings) and will have to reach the application server. There will also be waits; however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up. Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is capture the physical flow characteristics of the query between the client (Management Studio, in this case) and the server and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics. Number of server roundtrips – a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements, and they are separated by the ‘GO’ command, then there will be three different roundtrips. TDS Packets sent from the client – TDS (tabular data stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. TDS Packets sent from the client is the number of packets sent from the client; in case the request is large, then it may need more buffers, and eventually might even need more server roundtrips. TDS packets received from server – is the number of TDS packets sent by the server to the client during the query execution. Bytes sent from client – is the volume of data sent to our SQL Server, measured in bytes; i.e. how big a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as a procedure name + parameters, and this will minimize the network pressure. Bytes received from server – is the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server. Client processing time – is the amount of time spent in milliseconds between the first received response packet and the last received response packet by the client. Wait time on server replies – is the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client. 
Total execution time – is the sum of client processing time and wait time on server replies (the SQL Server internal processing time). Here is an illustration of the client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large ‘wait time on server replies’ means the server took a long time to produce the very first row. This is usual for queries that have operators that need the entire sub-query to be evaluated before they proceed (for example, sort and top operators). However, a query with a very short ‘wait time on server replies’ means that the query was able to return the first row fast. However a long ‘client processing time’ does not necessarily imply that the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result and this is how long it took until the very last row was returned. The bottom line is that developers and DBAs should work together and think carefully about resource utilization in the client-server environment. From experience I can say that so far I have seen only cases where the application developers and the database developers are on their own and do not ask questions about the other party’s world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronous way of utilizing resources between the client – server – client. Here is another example: think about a similar setup to the one above, but add another server to the game. Let’s say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500kb each our users will get the page slowly and they will still think that there is something wrong with our data. Anyway, I don’t mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of ‘the big picture’. I wrote a blog post a while back on this topic, and if you are interested, you can read it here about the big picture. And finally, here are some guidelines for monitoring the network performance and improving it: Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow developers: ‘why?’. Monitor your network counters in Perfmon: Network Interface:Output queue length, Redirector:Network errors/sec, TCPv4: Segments retransmitted/sec and so on. Make sure to establish a good friendship with your network administrator (buy them coffee, for example) and get into a conversation about the network settings. Have them explain to you how the network cards are set up – are they standalone, are they ‘teamed’, what are the settings – full duplex and so on. Find some time to read a bit about networking. In this short blog post I hope I have turned your attention to ‘the big picture’ and the fact that there are other factors affecting our SQL Server, aside from its internal workings. 
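If you want to see whether this kind of mutual waiting is already visible on your own server, the accumulated wait statistics are a reasonable first stop. This is just a minimal sketch (the DMV is standard; remember the numbers are cumulative since the last service restart or manual clear of the stats):

-- Time spent waiting for the client to consume result sets
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'ASYNC_NETWORK_IO';

A steadily growing ASYNC_NETWORK_IO figure is a hint to look at the client application and the network before blaming the database engine.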
As further reading I would still highly recommend the Wait Stats series on this blog; I would also recommend you have the coffee-break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the posts in the Wait Types and Queues series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL

    Read the article

  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is an MCITP Database Administrator and Developer, who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in TSQL, in late 2006, he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in-depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog, and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting. On a personal note, I think Jonathan is an extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something that needs to be improved, he has contacted me without hesitation and guided me to improve, change and learn. During all this time, he has not lost his focus on helping the larger community. I am honored that he has agreed to provide his views on the complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events. Here is the guest blog post by Jonathan: SQL Server troubleshooting is all about correlating related pieces of information together to identify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, “So and so application is slow, what’s wrong with the SQL Server.” One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish that I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call, the problem isn’t related to SQL Server at all, but every now and then in my initial quick checks, something pops up that makes me start looking at things further. The SQL Server is slow: we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_exec_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() shows a high write stall rate for tempdb, while our user databases show high read stall rates on the data files. A quick check of some performance counters shows Page Life Expectancy on the server bouncing up and down in the 50-150 range, the Free Pages counter consistently hitting zero, and the Free List Stalls/sec counter repeatedly jumping over 10, but Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem? In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system specs shows it is a dual dual-core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max server memory is configured at 6GB and we think that this should be enough to handle the workload; or is it? This is a unique scenario because there are a couple of things happening inside of this system, and they all relate to what the root cause of the performance problem is on the system. 
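For reference, the kind of quick file-level stall check described above can be written as a short query. This is only a sketch (the DMV and its columns have been available since SQL Server 2005; the averaging and ordering are just one way to slice the data):

SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.num_of_reads,
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_stall_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY avg_read_stall_ms DESC;

High read stalls on the user database files combined with high write stalls on tempdb is exactly the pattern described in this scenario.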
If we were to query sys.dm_exec_query_stats for the TOP 10 queries, by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O and possibly CPU against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() and see the statement text, and also CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. Ok, quick check, the plans are pretty big, I see some large index seeks, that estimate 2.8GB of data movement between operators, but everything looks like it is optimized the best it can be. Nothing really stands out in the code, and the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem right? Not exactly! If we were to look at how much memory the plan cache is taking by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I’ll give you a second to go back and read that again. Yes, you read it correctly, it says 75% of the memory under 8GB, but you don’t have to take my word for it, you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2. In this scenario the application uses an entirely adhoc workload against SQL Server and this leads to plan cache bloat, and up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to cause complete flushing of the buffer cache, not just once initially, but then another time during the queries execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well. The real clue here is the Memory counters for the instance; Page Life Expectancy, Free List Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is fluctuating between 50 and 150 constantly is a sign that the buffer cache is experiencing constant churn of data, once every minute to two and a half minutes. If you add to the Page Life Expectancy counter, the consistent bottoming out of Free List Pages along with Free List Stalls/sec consistently spiking over 10, and you have the perfect memory pressure scenario. All of sudden it may not be that our disk subsystem is the problem, but is instead an innocent bystander and victim. Side Note: The Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. The Books Online and a number of other references will tell you that this counter should remain on average above 300 which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems and most published documents pre-date the predominance of 64 bit computing and easy availability to larger amounts of memory in SQL Servers. 
As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were posted. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade. Back to our problem and its investigation: There are two things really wrong with this server; first the plan cache is excessively consuming memory and bloated in size and we need to look at that and second we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on there were a lot of single use plans found in sys.dm_exec_cached_plans (where usecounts=1). Single use plans waste space in the plan cache, especially when they are adhoc plans for statements that had concatenated filter criteria that is not likely to reoccur with any frequency.  SQL Server 2005 doesn’t natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney, showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses proper parameterized calls to the database. You didn’t write the app, and you can’t change its design? Ok, well you could try to force parameterization to occur by creating and keeping plan guides in place, or we can try forcing parameterization at the database level by using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED and that might help. If neither of these help, we could periodically dump the plan cache for that database, as discussed as being a problem in Kalen’s blog post referenced above; not an ideal scenario. The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB the plan cache could use at most 9GB  [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache.  If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we have SQL Server 2005 Service Pack 2, 3, and 4 these days which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post. In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our Server Admin and SAN Admin, and there wasn’t much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn’t until I had this same problem occur on another system that I actually figured out how to really troubleshoot this down to the root cause.  I couldn’t believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it. SQL Server is constantly telling a story to anyone that will listen. 
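If you want to see how your own plan cache compares, the checks described above boil down to a couple of short queries. This is a minimal sketch (the DMVs are standard; note that on SQL Server 2012 and later the single_pages_kb and multi_pages_kb columns are replaced by a single pages_kb column):

-- Overall size of the SQL and object plan cache stores
SELECT [type], SUM(single_pages_kb + multi_pages_kb) / 1024 AS cache_size_mb
FROM sys.dm_os_memory_clerks
WHERE [type] IN ('CACHESTORE_SQLCP', 'CACHESTORE_OBJCP')
GROUP BY [type];

-- How much of the cache is taken up by plans that have only ever been used once
SELECT objtype, COUNT(*) AS single_use_plans,
       SUM(CAST(size_in_bytes AS BIGINT)) / 1024 / 1024 AS size_mb
FROM sys.dm_exec_cached_plans
WHERE usecounts = 1
GROUP BY objtype
ORDER BY size_mb DESC;

A plan cache dominated by single-use adhoc plans is a big part of the story the server is telling in a scenario like this one.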
As the DBA, you have to sit back and listen to all that it’s telling you, and then evaluate the big picture and how all the pieces of data you can gather from SQL Server about performance relate to each other. One of the greatest tools out there is actually free, in the form of the Diagnostic Scripts for SQL Server 2005 and 2008 created by MVP Glenn Alan Berry. Glenn’s scripts collect a majority of the information that SQL has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the outputs of each individual query might be telling you. When I read Pinal’s blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation about why checking memory counters is so important when looking at an I/O related wait type. I thought I’d chat with him briefly on Google Talk/Twitter DM and point this out, and offer a couple of other points I noted, so that he could add the information to his blog post if he found it useful. Instead he asked that I write a guest blog for this. I am honored to be a guest blogger, and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running, and where it may be having problems; it is up to us to play detective and find out how all that information comes together to tell us what’s really the problem. This blog post is written by Jonathan Kehayias (Blog | Twitter). Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • CloudBerry Online Backup 1.5 for Windows Home Server

    - by The Geek
    Overview CloudBerry Online Backup version 1.5 is a front end application for Amazon S3 storage for backing up your Windows Home Server data. It makes backing up your essential data to Amazon S3 an easy process in the event the disaster strikes. Installation You install the Cloudberry Addin as you do for any addins for Windows Home Server. On a PC on your network, browse to the shared folders on your server and open the Add-Ins folder and copy over WHS_CloudBerryOnlineBackupSetup_v1.5.0.81S3o.msi (link below), then close out of the folder. Next launch the Windows Home Server Console, click Settings, then Add-Ins. Click on the Available tab and click the Install button. It installs very quickly, and when you get the Installation Succeeded dialog click OK. You will lose connection through the Console, just click OK, then reconnect. After reconnecting, you’ll see CloudBerry Backup has been installed, and you can begin using it. You can setup a backup plan right away or find out what’s new with version 1.5. Amazon S3 Account If you don’t already have an Amazon S3 account, you’ll be prompted to create a new one. Click on the Create an account hyperlink, which takes you to the Amazon S3 page where you can sign up. After reviewing the functionality of Amazon S3, click on the Sign Up for Amazon S3 button. Enter in your contact information and accept the Amazon Web Services Customer Agreement. You’re then shown their pricing for storage plans. The amount of storage space you use will depend on your needs. It’s relatively cheap for smaller amounts of data. Just keep in mind the more data you store and download, the more S3 is going to cost. Note: Amazon S3 is introducing Reduced Redundancy Storage which will lower the cost of the data stored on S3. CloudBerry 1.5 will support this new feature. You can find out more about this new pricing structure. Note: Keep in mind that after you first sign up for an Amazon S3 account, it can take up to 24 hours to be authorized. In fact, you may want to sign up for the S3 account before installing the Add-In. After you sign up for your S3 Account, you’ll be given access credentials which you can enter in and create a Storage Bucket name. Features & Use CloudBerry is wizard driven, straight-forward and easy to use. Here we take a look at creating a backup plan. To begin, click on the Setup Backup Plan button to kick off the wizard. Select your backup mode based on the amount of features you want. In our example we’re going to select Advanced Mode as it offers more features than Simple Mode. Select your backup storage account or create a new one. You can select a default account by checking Use currently selected account as default. Now you can go through and select the files and folders you want to backup from your home server. Check the box Show physical drives to get more of a selection of files and folders. This also allows you to backup files from your data drive as well. It has full support for drive extenders so you can backup your shares as well. The cool thing about Cloudberry is it allows you to drill down specific files and folders unlike other WHS backup utilities. Next you can use advanced filters to specify files and/or folders to skip if you want. There are compression and encryption options as well. This will save storage space, bandwidth, and keep your data secure. Purge Options allow you to customize options for getting rid of older files. You can also select the option to delete files from the S3 service that have been deleted locally. 
Be careful with this option however, as you won’t be able to restore files if you delete them locally. You have some nice scheduling options from running backups manually, specific date and time, or recurring daily, weekly or monthly. Receive email notifications in all cases or when a backup fails. This is a good option so you know if things were successful or something failed, and you need to back it up manually. Email notifications… Give your plan a name… Then if the summary page looks good you can continue, or still go back at this point if something doesn’t look correct and needs adjusting. That’s it! You’re ready to go, and you have an option to start your first backup right away. After you’ve created a backup plan, you can go in and edit, delete, view history, or restore files. Restoring Files using CloudBerry To restore data from your backups kick off the Restore Wizard and select the backup to restore from. You can select the last backup, a specific point in time, or manually browse through the files. Browse through the directory and select the files you need to restore. Choose the destination to restore the files to. You can select from the original location, a specific location, to overwrite existing files, or set the location as the default for future restores. If the files are encrypted, enter in the correct passwords. If the summary looks good, click on Next to start the restore process. You’ll be shown a progress bar at the bottom of the screen while the files are restored. After the process has completed, close out of the Restore Wizard. In this example we restored a couple of music files to the desktop of Windows Home Server… But as shown above you can save them to the original location, other network locations, or WHS shared folders. This can make it a lot easier to keep track of files you’ve restored. You can also access different options for CloudBerry by clicking Settings in WHS Console then CloudBerry Backup. Here you can set up a new storage account, check for updates, app options, Diagnostics, and send feedback. Under Options there are several settings you can tweak to get the best experience for your WHS backups. CloudBerry Web Interface Another nice feature is the CloudBerry Web Interface so you can access your data from anywhere you have an Internet connection. To check it out in WHS Console, click on the Backup Web Interface link…you’ll probably want to bookmark the link in your favorite browser. Note: This feature is still in beta and at the time of this review, the Web Interface wasn’t up and running so we weren’t able to test it out. Performance The Cloudberry app works very well through the Windows Home Server Console. The amount of time it takes to backup or restore your data will depend on the speed of your Internet connection and size of the files. In our tests, backing up 1GB of data to the Amazon S3 account took around an hour, but we were running it on a DSL with limited upload speeds so your mileage will vary. Product Support In our experience, the team at CloudBerry offered great support in a timely manner when contacting them. You can fill out a help request through a form on their website and they also have a community forum. Conclusion We were very pleased with CloudBerry Online Backup for WHS. It’s wizard driven interface makes it extremely easy to use, and offers comprehensive backup choices for your Amazon S3 account. 
CloudBerry will only back up files that have been modified, so if files haven’t been changed, they won’t be backed up again. They offer a free 15-day trial, and a full license costs $29.99 after that. Once you buy the app you own it, and charges to your S3 account will vary depending on the amount of data you upload. If you’re looking for an effective and easy-to-use front-end application for backing up your Windows Home Server data to your Amazon S3 account, CloudBerry is a recommended, affordable choice. Download CloudBerry for Windows Home Server Sign Up For Amazon S3 Account Rating Installation: 9 Ease of Use: 8 Features: 8 Performance: 8 Product Support: 8

    Read the article

  • GuestPost: Unit Testing Entity Framework (v1) Dependent Code using TypeMock Isolator

    - by Eric Nelson
    Time for another guest post (check out others in the series), this time bringing together the world of mocking with the world of Entity Framework. A big thanks to Moses for agreeing to do this. Unit Testing Entity Framework Dependent Code using TypeMock Isolator by Muhammad Mosa Introduction Unit testing data access code in my opinion is a challenging thing. Let us consider unit tests and integration tests. In integration tests you are allowed to have environmental dependencies such as a physical database connection to insert, update, delete or retrieve your data. However when performing unit tests it is often much more efficient and productive to remove environmental dependencies. Instead you will need to fake these dependencies. Faking a database (also known as mocking) can be relatively straightforward but the version of Entity Framework released with .Net 3.5 SP1 has a number of implementation specifics which actually make faking the existence of a database quite difficult. Faking Entity Framework As mentioned earlier, to effectively unit test you will need to fake/simulate Entity Framework calls to the database. There are many free open source mocking frameworks that can help you achieve this but it will require additional effort to overcome and work around a number of limitations in those frameworks. Examples of these limitations include: Not able to fake calls to non-virtual methods Not able to fake sealed classes Not able to fake LINQ to Entities queries (replace database calls with in-memory collection calls) There is a mocking framework which is flexible enough to handle limitations such as those above. The commercially available TypeMock Isolator can do the job for you with less code and ultimately more readable unit tests. I’m going to demonstrate tackling one of those limitations using MoQ as my mocking framework. Then I will tackle the same issue using TypeMock Isolator. Mocking Entity Framework with MoQ One basic need when faking Entity Framework is to fake the ObjectContext. This cannot be done by passing any connection string. You have to pass a correct Entity Framework connection string that specifies CSDL, SSDL and MSL locations along with a provider connection string. Assuming we are going to do that, we’ll explore another limitation. The limitation we are going to face now is related to not being able to fake calls to non-virtual/overridable members with MoQ. I have the following repository method that adds an EntityObject (an instance of a Blog entity) to the Blogs entity set in an ObjectContext. public override void Add(Blog blog) { if(BlogContext.Blogs.Any(b=>b.Name == blog.Name)) { throw new InvalidOperationException("Blog with same name already exists!"); } BlogContext.AddToBlogs(blog); } The method does a very simple check that the name of the new Blog entity instance doesn’t already exist. This is done through the simple LINQ query above. If the blog doesn’t already exist it simply adds it to the current context to be saved when SaveChanges of the ObjectContext instance (e.g. BlogContext) is called. However, if a blog with the same name exists, an exception (InvalidOperationException) will be thrown. Let us now create a unit test for the Add method using MoQ. [TestMethod] [ExpectedException(typeof(InvalidOperationException))] public void Add_Should_Throw_InvalidOperationException_When_Blog_With_Same_Name_Already_Exists() { //(1) We shouldn't depend on configuration when doing unit tests! 
But, //its a workaround to fake the ObjectContext string connectionString = ConfigurationManager .ConnectionStrings["MyBlogConnString"] .ConnectionString; //(2) Arrange: Fake ObjectContext var fakeContext = new Mock<MyBlogContext>(connectionString); //(3) Next Line will pass, as ObjectContext now can be faked with proper connection string var repo = new BlogRepository(fakeContext.Object); //(4) Create fake ObjectQuery<Blog>. Will be used to substitute MyBlogContext.Blogs property var fakeObjectQuery = new Mock<ObjectQuery<Blog>>("[Blogs]", fakeContext.Object); //(5) Arrange: Set Expectations //Next line will throw an exception by MoQ: //System.ArgumentException: Invalid setup on a non-overridable member fakeContext.SetupGet(c=>c.Blogs).Returns(fakeObjectQuery.Object); fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true); //Act repo.Add(new Blog { Name = "NewBlog" }); } This test method is checking to see if the correct exception ([ExpectedException(typeof(InvalidOperationException))]) is thrown when a developer attempts to Add a blog with a name that’s already exists. On (1) a connection string is initialized from configuration file. To retrieve the full connection string. On (2) a fake ObjectContext is being created. The ObjectContext here is MyBlogContext and its being created using this var fakeContext = new Mock<MyBlogContext>(connectionString); This way a fake context is being created using MoQ. On (3) a BlogRepository instance is created. BlogRepository has dependency on generate Entity Framework ObjectContext, MyObjectContext. And so the fake context is passed to the constructor. var repo = new BlogRepository(fakeContext.Object); On (4) a fake instance of ObjectQuery<Blog> is being created to use as a substitute to MyObjectContext.Blogs property as we will see in (5). On (5) setup an expectation for calling Blogs property of MyBlogContext and substitute the return result with the fake ObjectQuery<Blog> instance created on (4). When you run this test it will fail with MoQ throwing an exception because of this line: fakeContext.SetupGet(c=>c.Blogs).Returns(fakeObjectQuery.Object); This happens because the generate property MyBlogContext.Blogs is not virtual/overridable. And assuming it is virtual or you managed to make it virtual it will fail at the following line throwing the same exception: fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true); This time the test will fail because the Any extension method is not virtual/overridable. You won’t be able to replace ObjectQuery<Blog> with fake in memory collection to test your LINQ to Entities queries. Now lets see how replacing MoQ with TypeMock Isolator can help. 
Mocking Entity Framework with TypeMock Isolator The following is the same test method we had above for MoQ but this time implemented using TypeMock Isolator: [TestMethod] [ExpectedException(typeof(InvalidOperationException))] public void Add_New_Blog_That_Already_Exists_Should_Throw_InvalidOperationException() { //(1) Create fake in memory collection of blogs var fakeInMemoryBlogs = new List<Blog> {new Blog {Name = "FakeBlog"}}; //(2) create fake context var fakeContext = Isolate.Fake.Instance<MyBlogContext>(); //(3) Setup expected call to MyBlogContext.Blogs property through the fake context Isolate.WhenCalled(() => fakeContext.Blogs) .WillReturnCollectionValuesOf(fakeInMemoryBlogs.AsQueryable()); //(4) Create new blog with a name that already exits in the fake in memory collection in (1) var blog = new Blog {Name = "FakeBlog"}; //(5) Instantiate instance of BlogRepository (Class under test) var repo = new BlogRepository(fakeContext); //(6) Acting by adding the newly created blog () repo.Add(blog); } When running the above test method it will pass as the Add method of BlogRepository is going to throw an InvalidOperationException which is the expected behaviour. Nothing prevents us from faking out the database interaction! Even faking ObjectContext  at (2) didn’t require a connection string. On (3) Isolator sets up a faking result for MyBlogContext.Blogs when its being called through the fake instance fakeContext created on (2). The faking result is just an in-memory collection declared an initialized on (1). Finally at (6) action we call the Add method of BlogRepository passing a new Blog instance that has a name that’s already exists in the fake in-memory collection which we set up at (1). As expected the test will pass because it will throw the expected exception defined on top of the test method - InvalidOperationException. TypeMock Isolator succeeded in faking Entity Framework with ease. Conclusion We explored how to write a simple unit test using TypeMock Isolator for code which is using Entity Framework. We also explored a few of the limitations of other mocking frameworks which TypeMock is successfully able to handle. There are workarounds that you can use to overcome limitations when using MoQ or Rhino Mock, however the workarounds will require you to write more code and your tests will likely be more complex. For a comparison between different mocking frameworks take a look at this document produced by TypeMock. You might also want to check out this open source project to compare mocking frameworks. I hope you enjoyed this post Muhammad Mosa http://mosesofegypt.net/ http://twitter.com/mosessaur Screencast of unit testing Entity Framework Related Links GuestPost: Introduction to Mocking GuesPost: Typemock Isolator – Much more than an Isolation framework

    Read the article

  • Advantages of SQL Backup Pro

    - by Grant Fritchey
    Getting backups of your databases in place is a fundamental issue for protection of the business. Yes, I said business, not data, not databases, but business. Because of a lack of good, tested, backups, companies have gone completely out of business or suffered traumatic financial loss. That’s just a simple fact (outlined with a few examples here). So you want to get backups right. That’s a big part of why we make Red Gate SQL Backup Pro work the way it does. Yes, you could just use native backups, but you’ll be missing a few advantages that we provide over and above what you get out of the box from Microsoft. Let’s talk about them. Guidance If you’re a hard-core DBA with 20+ years of experience on every version of SQL Server and several other data platforms besides, you may already know what you need in order to get a set of tested backups in place. But, if you’re not, maybe a little help would be a good thing. To set up backups for your servers, we supply a wizard that will step you through the entire process. It will also act to guide you down good paths. For example, if your databases are in Full Recovery, you should set up transaction log backups to run on a regular basis. When you choose a transaction log backup from the Backup Type you’ll see that only those databases that are in Full Recovery will be listed: This makes it very easy to be sure you have a log backup set up for all the databases you should and none of the databases where you won’t be able to. There are other examples of guidance throughout the product. If you have the responsibility of managing backups but very little knowledge or time, we can help you out. Throughout the software you’ll notice little green question marks. You can see two in the screen above and more in each of the screens in other topics below this one. Clicking on these will open a window with additional information about the topic in question which should help to guide you through some of the tougher decisions you may have to make while setting up your backup jobs. Here’s an example: Backup Copies As a part of the wizard you can choose to make a copy of your backup on your network. This process runs as part of the Red Gate SQL Backup engine. It will copy your backup, after completing the backup so it doesn’t cause any additional blocking or resource use within the backup process, to the network location you define. Creating a copy acts as a mechanism of protection for your backups. You can then backup that copy or do other things with it, all without affecting the original backup file. This requires either an additional backup or additional scripting to get it done within the native Microsoft backup engine. Offsite Storage Red Gate offers you the ability to immediately copy your backup to the cloud as a further, off-site, protection of your backups. It’s a service we provide and expose through the Backup wizard. Your backup will complete first, just like with the network backup copy, then an asynchronous process will copy that backup to cloud storage. Again, this is built right into the wizard or even the command line calls to SQL Backup, so it’s part a single process within your system. With native backup you would need to write additional scripts, possibly outside of T-SQL, to make this happen. Before you can use this with your backups you’ll need to do a little setup, but it’s built right into the product to get this done. You’ll be directed to the web site for our hosted storage where you can set up an account. 
Compression If you have SQL Server 2008 Enterprise, or you’re on SQL Server 2008R2 or greater and you have a Standard or Enterprise license, then you have backup compression. It’s built right in and works well. But, if you need even more compression then you might want to consider Red Gate SQL Backup Pro. We offer four levels of compression within the product. This means you can get a little compression faster, or you can just sacrifice some CPU time and get even more compression. You decide. For just a simple example I backed up AdventureWorks2012 using both methods of compression. The resulting file from native was 53mb. Our file was 33mb. That’s a file that is smaller by 38%, not a small number when we start talking gigabytes. We even provide guidance here to help you determine which level of compression would be right for you and your system: So for this test, if you wanted maximum compression with minimum CPU use you’d probably want to go with Level 2 which gets you almost as much compression as Level 3 but will use fewer resources. And that compression is still better than the native one by 10%. Restore Testing Backups are vital. But, a backup is just a file until you restore it. How do you know that you can restore that backup? Of course, you’ll use CHECKSUM to validate that what was read from disk during the backup process is what gets written to the backup file. You’ll also use VERIFYONLY to check that the backup header and the checksums on the backup file are valid. But, this doesn’t do a complete test of the backup. The only complete test is a restore. So, what you really need is a process that tests your backups. This is something you’ll have to schedule separately from your backups, but we provide a couple of mechanisms to help you out here. First, when you create a backup schedule, all done through our wizard which gives you as much guidance as you get when running backups, you get the option of creating a reminder to create a job to test your restores. You can enable this or disable it as you choose when creating your scheduled backups. Once you’re ready to schedule test restores for your databases, we have a wizard for this as well. After you choose the databases and restores you want to test, all configurable for automation, you get to decide if you’re going to restore to a specified copy or to the original database: If you’re doing your tests on a new server (probably the best choice) you can just overwrite the original database if it’s there. If not, you may want to create a new database each time you test your restores. Another part of validating your backups is ensuring that they can pass consistency checks. So we have DBCC built right into the process. You can even decide how you want DBCC run, which error messages to include, limit or add to the checks being run. With this you could offload some DBCC checks from your production system so that you only run the physical checks on your production box, but run the full check on this backup. That makes backup testing not just a general safety process, but a performance enhancer as well: Finally, assuming the tests pass, you can delete the database, leave it in place, or delete it regardless of the tests passing. All this is automated and scheduled through the SQL Agent job on your servers. Running your databases through this process will ensure that you don’t just have backups, but that you have tested backups. 
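For comparison, the native T-SQL equivalent of the verification steps described above looks something like the sketch below. The file paths, the test database name and the logical file names are placeholders, and WITH COMPRESSION assumes an edition that supports native backup compression:

-- Back up with page checksums and native compression
BACKUP DATABASE AdventureWorks2012
TO DISK = N'D:\Backups\AdventureWorks2012.bak'
WITH CHECKSUM, COMPRESSION, INIT;

-- Check the backup header and checksums without restoring
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\AdventureWorks2012.bak'
WITH CHECKSUM;

-- The only complete test is a real restore followed by a consistency check
RESTORE DATABASE AdventureWorks2012_Test
FROM DISK = N'D:\Backups\AdventureWorks2012.bak'
WITH MOVE N'AdventureWorks2012_Data' TO N'D:\Test\AdventureWorks2012_Test.mdf',
     MOVE N'AdventureWorks2012_Log' TO N'D:\Test\AdventureWorks2012_Test.ldf';

DBCC CHECKDB (AdventureWorks2012_Test) WITH NO_INFOMSGS;

The wizards described above essentially schedule, report on and clean up after this sequence for you.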
Single Point of Management If you have more than one server to maintain, getting backups setup could be a tedious process. But, with Red Gate SQL Backup Pro you can connect to multiple servers and then manage all your databases and all your servers backups from a single location. You’ll be able to see what is scheduled, what has run successfully and what has failed, all from a single interface without having to connect to different servers. Log Shipping Wizard If you want to set up log shipping as part of a disaster recovery process, it can frequently be a pain to get configured correctly. We supply a wizard that will walk you through every step of the process including setting up alerts so you’ll know should your log shipping fail. Summary You want to get your backups right. As outlined above, Red Gate SQL Backup Pro will absolutely help you there. We supply a number of processes and functionalities above and beyond what you get with SQL Server native. Plus, with our guidance, hints and reminders, you will get your backups set up in a way that protects your business.

    Read the article

  • Mr Flibble: As Seen Through a Lens, Darkly

    - by Phil Factor
    One of the rewarding things about getting involved with Simple-Talk has been in meeting and working with some pretty daunting talents. I’d like to say that Dom Reed’s talents are at the end of the visible spectrum, but then there is Richard, who pops up on national radio occasionally, presenting intellectual programs, Andrew, master of the ukulele, with his pioneering local history work, and Tony with marathon running and his past as a university lecturer. However, Dom, who is Red Gate’s head of creative design and who did the preliminary design work for Simple-Talk, has taken the art photography to an extreme that was impossible before Photoshop. He’s not the first person to take a photograph of himself every day for two years, but he is definitely the first to weave the results into a frightening narrative that veers from comedy to pathos, using all the arts of Photoshop to create a fictional character, Mr Flibble.   Have a look at some of the Flickr pages. Uncle Spike The B-Men – Woolverine The 2011 BoyZ iN Sink reunion tour turned out to be their last Error 404 – Flibble not found Mr Flibble is not a normal type of alter-ego. We generally prefer to choose bronze age warriors of impossibly magnificent physique and stamina; superheroes who bestride the world, scorning the forces of evil and anarchy in a series noble and righteous quests. Not so Dom, whose Mr Flibble is vulnerable, and laid low by an addiction to toxic substances. His work has gained an international cult following and is used as course material by several courses in photography. Although his work was for a while ignored by the more conventional world of ‘art’ photography they became famous through the internet. His photos have received well over a million views on Flickr. It was definitely time to turn this work into a book, because the whole sequence of images has its maximum effect when seen in sequence. He has a Kickstarter project page, one of the first following the recent UK launch of the crowdfunding platform. The publication of the book should be a major event and the £45 I shall divvy up will be one of the securest investments I shall ever make. The local news in Cambridge picked up on the project and I can quote from the report by the excellent Cabume website , the source of Tech news from the ‘Cambridge cluster’ Put really simply Mr Flibble likes to dress up and take pictures of himself. One of the benefits of a split personality, however is that Mr Flibble is supported in his endeavour by Reed’s top notch photography skills, supreme mastery of Photoshop and unflinching dedication to the cause. The duo have collaborated to take a picture every day for the past 730-plus days. It is not a big surprise that neither Mr Flibble nor Reed watches any TV: In addition to his full-time role at Cambridge software house,Red Gate Software as head of creativity and the two to five hours a day he spends taking the Mr Flibble shots, Reed also helps organise the . And now Reed is using Kickstarter to see if the world is ready for a Mr Flibble coffee table book. Judging by the early response it is. At the time of writing, just a few days after it went live, ‘I Drink Lead Paint: An absurd photography book by Mr Flibble’ had raised £1,545 of the £10,000 target it needs to raise by the Friday 30 November deadline from 37 backers. 
Following the standard Kickstarter template, Reed is offering a series of rewards based on the amount pledged, ranging from a Mr Flibble desktop wallpaper for pledges of £5 or more to a signed copy of the book for pledges of £45 or more, right up to a starring role in the book for £1,500. Mr Flibble is unquestionably one of the more deranged Kickstarter hopefuls, but don’t think for a second that he doesn’t have a firm grasp on the challenges he faces on the road to immortalisation on 150 gsm stock. Under the section ‘risks and challenges’ on his Kickstarter page his statement begins: “An angry horde of telepathic iguanas discover the world’s last remaining stock of vintage lead paint and hold me to ransom. Gosh how I love to guzzle lead paint. Anyway… faced with such brazen bravado, I cower at the thought of taking on their combined might and die a sad and lonely Flibble deprived of my one and only true liquid love.” At which point, Reed manages to wrestle away the keyboard, giving him the opportunity to present slightly more cogent analysis of the obstacles the project must still overcome. We asked Reed a few questions about Mr Flibble’s Kickstarter adventure and felt that his responses were worth publishing in full: Firstly, how did you manage it – holding down a full time job and also conceiving and executing these ideas on a daily basis? I employed a small team of ferocious gerbils to feed me ideas on a daily basis. Whilst most of their ideas were incomprehensibly rubbish and usually revolved around food, just occasionally they’d give me an idea like my B-Men series. As a backup plan though, I found that the best way to generate ideas was to actually start taking photos. If I were to stand in front of the camera, pull a silly face, place a vegetable on my head or something else equally stupid, the resulting photo of that would typically spark an idea when I came to look at it. Sitting around idly trying to think of an idea was doomed to result in no ideas. I admit that I really struggled with time. I’m proud that I never missed a day, but it was definitely hard when you were late from work, tired or doing something socially on the same day. I don’t watch TV, which I guess really helps, because I’d frequently be spending 2-5 hours taking and processing the photos every day. Are there any overlaps between software development and creative thinking? Software is an inherently creative business and the speed that it moves ensures you always have to find solutions to new things. Everyone in the team needs to be a problem solver. Has it helped me specifically with my photography? Probably. Working within teams that continually need to figure out new stuff keeps the brain feisty I suppose, and I guess I’m continually exposed to a lot of possible sources of inspiration. How specifically will this Kickstarter project allow you to test the commercial appeal of your work and do you plan to get the book into shops? It’s taken a while to be confident saying it, but I know that people like the work that I do. I’ve had well over a million views of my pictures, many humbling comments and I know I’ve garnered some loyal fans out there who anticipate my next photo. For me, this Kickstarter is about seeing if there’s worth to my work beyond just making people smile. In an online world where there’s an abundance of freely available content, can you hope to receive anything from what you do, or would people just move onto the next piece of content if you happen to ask for some support? 
A book has been the single-most requested thing that people have asked me to produce and it’s something that I feel would showcase my work well. It’s just hard to convince people in the publishing industry just now to take any kind of risk – they’ve been hit hard. If I can show that people would like my work enough to buy a book, then it sends a pretty clear picture that publishers might hear, or it gives me the confidence enough to invest in myself a bit more – hard to do when you’re riddled with self-doubt! I’d love to see my work in the shops, yes. I could see it being the thing that someone flips through idly as they’re Christmas shopping and recognizing that it’d be just the perfect gift for their difficult to buy for friend or relative. That said, working in the software industry means I’m clearly aware of how I could use technology to distribute my work, but I can’t deny that there’s something very appealing to having a physical thing to hold in your hands. If the project is successful is there a chance that it could become a full-time job? At the moment that seems like a distant dream, as should this be successful, there are many more steps I’d need to take to reach any kind of business viability. Kickstarter seems exactly that – a way for people to help kick start me into something that could take off. If people like my work and want me to succeed with it, then taking a look at my Kickstarter page (and hopefully pledging a bit of support) would make my elbows blush considerably. So there is is. An opportunity to open the wallet just a bit to ensure that one of the more unusual talents sees the light in the format it deserves.  

    Read the article

  • Can't add Fedora 14 to grub.

    - by Dananjaya
    Today I installed Fedora 14 in a different partition in the same hard drive as Ubuntu. At the Fedora 14 installation, I chose not to install Boot-loader in the MBR, and instead chose to install it in the Fedora partition itself, which is according to my HD layout /sda3. After the Fedora 14 installation I booted in to Ubuntu and ran sudo update-grub but 'grub.cfg' fails to add Fedora 14 in to the OS list. Here is the output of boot-info script. Boot Info Script 0.60 from 17 May 2011 ============================= Boot Info Summary: =============================== = Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos1)/boot/grub on this drive. sda1: __________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 11.04 Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sda2: __________________________________________________________________________ File system: Extended Partition Boot sector type: Unknown Boot sector info: sda5: __________________________________________________________________________ File system: swap Boot sector type: - Boot sector info: sda3: __________________________________________________________________________ File system: ext4 Boot sector type: Grub Legacy Boot sector info: Grub Legacy (v0.97) is installed in the boot sector of sda3 and looks at sector 49897340 on boot drive #1 for the stage2 file. A stage2 file is at this location on /dev/sda. Stage2 looks on partition #3 for /grub/grub.conf. Operating System: Boot files: /grub/menu.lst /grub/grub.conf sda4: __________________________________________________________________________ File system: LVM2_member Boot sector type: - Boot sector info: ============================ Drive/Partition Info: ============================= Drive: sda _____________________________________________________________________ Disk /dev/sda: 40.0 GB, 40020664320 bytes 255 heads, 63 sectors/track, 4865 cylinders, total 78165360 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 * 2,048 49,865,759 49,863,712 83 Linux /dev/sda2 74,866,686 78,163,967 3,297,282 5 Extended /dev/sda5 74,866,688 78,163,967 3,297,280 82 Linux swap / Solaris /dev/sda3 49,866,752 50,890,751 1,024,000 83 Linux /dev/sda4 50,890,752 74,864,639 23,973,888 8e Linux LVM "blkid" output: ________________________________________________________________ Device UUID TYPE LABEL /dev/sda1 03e2a8da-171f-49e9-b24d-434e66cd1140 ext4 /dev/sda3 dea81d77-a375-4d0e-954e-1829f6b91f10 ext4 /dev/sda4 mzVoj0-GHJu-DJr4-0G2Y-SzZ0-LTfW-F01yf9 LVM2_member /dev/sda5 3e89ba8e-7754-4ee4-aca1-e2a82bffb7a7 swap ================================ Mount points: ================================= Device Mount_Point Type Options /dev/sda1 / ext4 (rw,errors=remount-ro,user_xattr,commit=0) =========================== sda1/boot/grub/grub.cfg: =========================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="2" if [ "${prev_saved_entry}" ]; then set 
saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(/dev/sda,msdos1)' search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140 if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=1024x768 load_video insmod gfxterm fi terminal_output gfxterm insmod part_msdos insmod ext2 set root='(/dev/sda,msdos1)' search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140 set locale_dir=($root)/boot/grub/locale set lang=en_US insmod gettext if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### if [ ${recordfail} != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "$linux_gfx_mode" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 2.6.38-8-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(/dev/sda,msdos1)' search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140 linux /boot/vmlinuz-2.6.38-8-generic root=UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 ro quiet splash vt.handoff=7 initrd /boot/initrd.img-2.6.38-8-generic } menuentry 'Ubuntu, with Linux 2.6.38-8-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(/dev/sda,msdos1)' search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140 echo 'Loading Linux 2.6.38-8-generic ...' linux /boot/vmlinuz-2.6.38-8-generic root=UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 ro single echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-2.6.38-8-generic } submenu "Previous Linux versions" { menuentry 'Ubuntu, with Linux 2.6.35-28-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(/dev/sda,msdos1)' search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140 linux /boot/vmlinuz-2.6.35-28-generic root=UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 ro quiet splash vt.handoff=7 initrd /boot/initrd.img-2.6.35-28-generic } menuentry 'Ubuntu, with Linux 2.6.35-28-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(/dev/sda,msdos1)' search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140 echo 'Loading Linux 2.6.35-28-generic ...' linux /boot/vmlinuz-2.6.35-28-generic root=UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 ro single echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-2.6.35-28-generic } menuentry 'Ubuntu, with Linux 2.6.32-21-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(/dev/sda,msdos1)' search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140 linux /boot/vmlinuz-2.6.32-21-generic root=UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 ro quiet splash vt.handoff=7 initrd /boot/initrd.img-2.6.32-21-generic } menuentry 'Ubuntu, with Linux 2.6.32-21-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(/dev/sda,msdos1)' search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140 echo 'Loading Linux 2.6.32-21-generic ...' linux /boot/vmlinuz-2.6.32-21-generic root=UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 ro single echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-2.6.32-21-generic } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(/dev/sda,msdos1)' search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140 linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(/dev/sda,msdos1)' search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140 linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### if [ "x${timeout}" != "x-1" ]; then if keystatus; then if keystatus --shift; then set timeout=-1 else set timeout=0 fi else if sleep --interruptible 3 ; then set timeout=0 fi fi fi ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### -------------------------------------------------------------------------------- =============================== sda1/etc/fstab: ================================ -------------------------------------------------------------------------------- # /etc/fstab: static file system information. # # Use 'blkid -o value -s UUID' to print the universally unique identifier # for a device; this may be used with UUID= as a more robust way to name # devices that works even if disks are added and removed. See fstab(5). 
# # proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda1 during installation # Commented out by Dropbox # UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda5 during installation UUID=3e89ba8e-7754-4ee4-aca1-e2a82bffb7a7 none swap sw 0 0 UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 / ext4 errors=remount-ro,user_xattr 0 1 -------------------------------------------------------------------------------- =================== sda1: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) 0.065803528 = 0.070656000 boot/grub/core.img 1 21.263332367 = 22.831329280 boot/grub/grub.cfg 1 0.771381378 = 0.828264448 boot/initrd.img-2.6.31-wl 1 2.054199219 = 2.205679616 boot/initrd.img-2.6.32-21-generic 3 2.893260956 = 3.106615296 boot/initrd.img-2.6.35-28-generic 2 6.833232880 = 7.337127936 boot/initrd.img-2.6.38-8-generic 2 1.772453308 = 1.903157248 boot/vmlinuz-2.6.32-21-generic 2 2.068012238 = 2.220511232 boot/vmlinuz-2.6.35-28-generic 1 5.532531738 = 5.940510720 boot/vmlinuz-2.6.38-8-generic 1 6.833232880 = 7.337127936 initrd.img 2 2.893260956 = 3.106615296 initrd.img.old 2 5.532531738 = 5.940510720 vmlinuz 1 2.068012238 = 2.220511232 vmlinuz.old 1 ============================= sda3/grub/grub.conf: ============================= -------------------------------------------------------------------------------- # grub.conf generated by anaconda # # Note that you do not have to rerun grub after making changes to this file # NOTICE: You have a /boot partition. This means that # all kernel and initrd paths are relative to /boot/, eg. # root (hd0,2) # kernel /vmlinuz-version ro root=/dev/mapper/VolGroup-lv_root # initrd /initrd-[generic-]version.img #boot=/dev/sda3 default=0 timeout=0 splashimage=(hd0,2)/grub/splash.xpm.gz hiddenmenu title Fedora (2.6.35.6-45.fc14.i686) root (hd0,2) kernel /vmlinuz-2.6.35.6-45.fc14.i686 ro root=/dev/mapper/VolGroup-lv_root rd_LVM_LV=VolGroup/lv_root rd_LVM_LV=VolGroup/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rhgb quiet initrd /initramfs-2.6.35.6-45.fc14.i686.img -------------------------------------------------------------------------------- =================== sda3: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) 23.792903900 = 25.547436032 grub/grub.conf 1 23.792903900 = 25.547436032 grub/menu.lst 1 23.793020248 = 25.547560960 grub/stage2 1 23.817364693 = 25.573700608 initramfs-2.6.35.6-45.fc14.i686.img 2 23.787566185 = 25.541704704 initrd-plymouth.img 1 23.791228294 = 25.545636864 vmlinuz-2.6.35.6-45.fc14.i686 1 ======================== Unknown MBRs/Boot Sectors/etc: ======================== Unknown BootLoader on sda2 00000000 81 71 62 ff a1 94 89 ff 4d 43 3a ff fa f2 ec ff |.qb.....MC:.....| 00000010 fb f6 f1 ff fc f8 f4 ff fc f8 f4 ff fc f8 f4 ff |................| 00000020 5d 56 50 ff a1 94 89 ff 81 70 62 ff 81 70 62 ff |]VP......pb..pb.| 00000030 81 70 62 ff 81 70 62 ff 81 70 62 ff a1 94 89 ff |.pb..pb..pb.....| 00000040 4d 43 3a ff fa f2 ec ff fb f6 f1 ff fc f8 f4 ff |MC:.............| 00000050 fc f8 f4 ff fc f8 f4 ff 5d 56 50 ff a1 94 89 ff |........]VP.....| 00000060 81 70 62 ff 81 70 62 ff 81 70 62 ff 81 70 62 ff |.pb..pb..pb..pb.| 00000070 81 70 62 ff a1 94 89 ff 4d 43 3a ff fa f2 ec ff |.pb.....MC:.....| 00000080 fb f6 f1 ff fc f8 f4 ff fc f8 f4 ff fc f8 f4 ff |................| 00000090 5d 56 50 ff a0 93 89 ff 80 6f 61 ff 80 6f 61 ff |]VP......oa..oa.| 000000a0 80 6f 61 ff 80 6f 61 ff 80 6f 61 
ff a0 93 89 ff |.oa..oa..oa.....| 000000b0 4d 43 3a ff fa f2 ed ff fb f6 f2 ff fc f8 f5 ff |MC:.............| 000000c0 fc f8 f5 ff fc f8 f5 ff 5d 56 50 ff 9f 93 88 ff |........]VP.....| 000000d0 7f 6f 60 ff 7f 6f 60 ff 7f 6f 60 ff 7f 6f 60 ff |.o`..o`..o`..o`.| 000000e0 7f 6f 60 ff 9f 93 88 ff 4d 43 3a ff fa f2 ed ff |.o`.....MC:.....| 000000f0 fb f6 f2 ff fc f8 f5 ff fc f8 f5 ff fc f8 f5 ff |................| 00000100 5d 56 50 ff 9f 93 88 ff 7f 6f 60 ff 7f 6f 60 ff |]VP......o`..o`.| 00000110 7f 6f 60 ff 7f 6f 60 ff 7f 6f 60 ff 9f 93 88 ff |.o`..o`..o`.....| 00000120 4d 43 3a ff fa f2 ed ff fb f6 f2 ff fc f8 f5 ff |MC:.............| 00000130 fc f8 f5 ff fc f8 f5 ff 5d 56 50 ff 9e 92 88 ff |........]VP.....| 00000140 7e 6e 60 ff 7e 6e 60 ff 7e 6e 60 ff 7e 6e 60 ff |~n`.~n`.~n`.~n`.| 00000150 7e 6e 60 ff 9e 92 88 ff 4d 43 3a ff fa f2 ed ff |~n`.....MC:.....| 00000160 fb f6 f2 ff fc f8 f5 ff fc f8 f5 ff fc f8 f5 ff |................| 00000170 5d 56 50 ff 9e 92 88 ff 7d 6d 5f ff 7d 6d 5f ff |]VP.....}m_.}m_.| 00000180 7d 6d 5f ff 7d 6d 5f ff 7d 6d 5f ff 9e 92 88 ff |}m_.}m_.}m_.....| 00000190 4d 43 3a ff fa f2 ed ff fb f6 f2 ff fc f8 f5 ff |MC:.............| 000001a0 fc f8 f5 ff fc f8 f5 ff 5d 56 50 ff 9e 92 88 ff |........]VP.....| 000001b0 7d 6d 5f ff 7d 6d 5f ff 7d 6d 5f ff 7d 6d 00 fe |}m_.}m_.}m_.}m..| 000001c0 ff ff 82 fe ff ff 02 00 00 00 00 50 32 00 00 00 |...........P2...| 000001d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * 000001f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa |..............U.| 00000200 =============================== StdErr Messages: =============================== unlzma: Decoder error According to this Fedora 14 is visible in sda3. Does anybody know a way to add Fedora 14 to grub.cfg of Ubuntu so I can choose which OS to boot? Thanks in advance.
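
    Since the 30_os-prober section of the generated grub.cfg above contains no Fedora entry, one workaround is a hand-written entry in /etc/grub.d/40_custom that chainloads the GRUB Legacy boot sector Fedora wrote to sda3. The sketch below is untested against this exact layout and assumes sda is the first BIOS disk, so that sda3 is (hd0,msdos3), and that Fedora's bootloader is still in the sda3 boot sector as the boot-info output indicates:

        # Append to /etc/grub.d/40_custom (below the 'exec tail' line)
        menuentry "Fedora 14 (chainload bootloader on sda3)" {
            insmod part_msdos
            insmod chain
            set root='(hd0,msdos3)'
            chainloader +1
        }

    After saving the file, run sudo update-grub so grub-mkconfig appends the entry to grub.cfg. Separately, because the Fedora root lives on LVM (sda4), os-prober sometimes only detects such installs once the LVM tools are available; installing lvm2 and activating the volume group (sudo apt-get install lvm2 && sudo vgchange -ay) before re-running sudo update-grub may let the automatic detection work without the custom entry, though I can't confirm that for this particular setup.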

    Read the article

  • CodePlex Daily Summary for Tuesday, June 15, 2010

    CodePlex Daily Summary for Tuesday, June 15, 2010New ProjectsBackup on Build: Backup critical files on each Visual Studio build.CDN Support for EPiServer CMS: This module adds CDN support for EPiServer CMS by modifying outgoing links.Custom WCF Bindings: This project contains some custom WCF LOB SDK bindings I have created including a SalesForce one. I will blog on updates as they occur. Please se...DocIcon for SharePoint 2010: DocIcon for SharePoint re-enables links from document icons in SharePoint 2010. This feature was in previous versions of SharePoint, but was remove...EEG Peak Detection: EEG Peak Detectionemployeemanagement1: employeemanagement1Enables map services on top of existing map providers like Google Maps: Services include Map visualization services, Map decoration services, Spot registration services and Spot naming services.fbprivacy: Tool to assess your Facebook Privacy SettingsfMRI SVM Toolbox: fMRI SVM ToolboxGCMS – using .Net for human CMS: GCMS makes only what you need to do with a CMS and nothing more and it makes it with .NETqjblog: My First Blog.Send2Sharepoint: Office(Word,Excel,Outlook) and windows explorer addin to upload documents to sharepoint document library. SharePoint Find and Replace: SharePoint Find & Replace allows you to replace a specific string within a site collection with a different value. For example, when you change a l...SharePoint Management Studio: This project developed on Visula Studio 2008 and c# language. The main aim is manage your SharePoint 2007 FARM.SharePoint PageController: A SharePoint solution which provides an extensible framework to perform actions on a per-page basis in SharePoint. OOTB functionality allows for f...SilverNotePad: Simple notepad built using MVVM patern.SolidWorks Addin Development: The SolidWorks Addin Development project is dedicated to helping developers and non-developers with creating fully functional addins.Sunlit World Scheme: Sunlit World Scheme is a nearly R4RS-compliant Scheme implementation that supports threading, TCP, UDP, cryptography, and simple graphics and windo...TimeBend: Time tracking gone wild.TinyCMS: Jednostavan CMS s mogućnosti unosa vijesti, linkova i natječaja. CSS je napravljen tek toliko... Aplikacija izrađena za dev4Fun natjecanje.Ujimanet Android: text categorization tool for androidVisual Storm Engine: Visual Storm es un motor para probar nuevas tecnologias orientadas a la creacion de video juegos. Por ahora solo soporta Windows Vista/7 y usa Dire...New ReleasesAjax ASP.Net Forum: developer.insecla.com-forum_v0.1.4: *VERSION: 0.1.4* FEATURES ADDED Rating Threads (Through AjaxCTK (included Ms .DLL in the BIN folder)) Empowered Within AJAX Custom star ima...AlphaGet: Alpha 3: Important: from this release WinGet changes its name to AlphaGet in order to identify it better and make it search-engine friendly. New Features N...Backup on Build: Backup on Build v1.0.0 Initial Release: Initial Release version 1.0.0Boleto.Net: BoletoNet: Última versão estável da BoletoNet.dll. 
O código fonte dessa versão pode ser encontrado em http://boletonet.codeplex.com/SourceControl/list/change...CDN Support for EPiServer CMS: CDN Support v1: See links on start page for information on how to use and install this module.Chargify.NET: Chargify.NET v0.750: Adding support for creating freemium subscription plans Adding preliminary JSON support Adding ISO 3166-1 Alpha 2 data embedded in the library ...Community Forums NNTP bridge: Community Forums NNTP Bridge V38: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...ContainerOne - C# application server: V0.1.3.0: New minor release containing: Infrastructure - core service An installer of a windows service which provides the following: Service registry Even...DocIcon for SharePoint 2010: wwEsp.DocIcon Deployment Package Release 1.0.0: This package installs the DocIcon feature on a SharePoint 2010 server farm. The solution is deployed as a Farm-level feature that can be enabled or...Ethical Hacking ASP.NET: Version 1.2.0.0: For the complete list of changes, new features and fixes in the new version, please view the Version History page. Read more about the available te...Folder Bookmarks: Folder Bookmarks 1.6.3: The latest version of Folder Bookmarks (1.6.3), with new features and Mini-Menu UI Changes (1.4). Once you have extracted the file, do not delete ...Hades: Projet Hadès - Official Demo - Version 0.1.1 Beta: Second release correcting some bugs... ---------------------------------------------------------------------------- - Projet Hadès - Official Demo...KooBoo Image Gallery: RC 1: This new Version has this features 1) Refactoring to change the mispelled word galery to gallery 2) Change to use the plugin in the same page of ...LibWowArmory: LibWowArmory 0.3 beta: LibWowArmory 0.3 betaThis release of the LibWowArmory source code matches the WoW Armory as of version 3.3.3. Changes since version 0.2.3:Solution...MailChimp4Umbraco: 0.90 stable: Can be used in productionMapWindow6: MapWindow 6.0 June 14: This version adds the WebMercator projection and fixes a bug that was causing some perfect spheres to be created as oblate WGS1984 spheroids.MDownloader: MDownloader-0.15.18.59782: Supported FileServe. Supported SharingMatrix. Fixed minor bugs.MGM - MyGroupManager: MyGroupManager v0.1.5 - Alpha: At this point the application appears feature complete and works pretty well. The code still needs some tweaks (error handling), and a general look...MvcPager: MvcPager 1.4: MvcPager 1.4 source codes and demo projects MvcPager 1.4版源代码及示例文件Nito.LINQ: Beta (v0.6): Rx version The "with Rx" versions of Nito.LINQ are built against Rx 1.0.2563.0, released 2010-06-09. Supported Platforms .NET 4.0 Client Profile, ...open gaze and mouse analyzer: Ogama 3.3: This release was published on 14.06.2010 and is a bugfix release. For the list of changes please visit http://www.ogama.net. Only use this installe...patterns & practices: Prism: Prism 4.0 Drop 2: Prism 4.0 Drop 2 Welcome to the second drop of Prism 4.0 (formally known as the Composite Application Guidance for WPF and Silverlight). 
This drop ...Prism Software Factory Light: 0.5 Beta: 4ward Prism Software Factory Light - 0.5 Beta releaseThis is the first public beta release of the 4ward Prism Software Factory Light that allows to...PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT: PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT--3.3: Over the last several months, my primary research effort has been directed at producing strictly portable development methods between C and C# . T...qjblog: v1.source: v1.sourceqjblog: v1blog: V1 BlogQuick Performance Monitor: Version 1.4.2: Added 'Move to new window' functionality.Refix - .NET dependency management: Refix v0.1.0.90 ALPHA: Added console tree-style visualisation of solution dependencies, as well as some bug fixes. This version should work out of the box with the demons...SEMICO Framework: Version Stable 1.0.0.3: Version Stable 1.0.0.3SharePoint Find and Replace: 1.0.16: Version: 1.0.16 This release is the first stable release of this project, including the Microsoft public license agreement. Fixes: Added about dia...SharePoint Management Studio: v1: v1SharePoint PageController: SharePoint PageController: For SharePoint 2010 and 2007 running on IIS 7Software Is Hardwork: Sw. Is Hw. Lib. 3.0.0.x+06: Sw. Is Hw. Lib. 3.0.0.x+06SolidWorks Addin Development: GenericAddinFramework-06.14.2010: R1.SourceGrid: SourceGrid 4.30: Sources are here Note that SourceGrid sources are not hosted on CodePlex. The sources are hosted on bitbucket.org Main Changes Improved hidden ...SSIS Expression Editor & Tester: Expression Editor and Tester v1.0.2.0: Corrected release of expression editor tool, no changes to control. Download and extract the files to get started, no install required. Changes Co...Sunlit World Scheme: Sunlit World Scheme - 20100614 - source and binary: This is the result of building the current source code in Debug mode. The source code is included.TinyCMS: TinyCMS: Source kodVCC: Latest build, v2.1.30614.0: Automatic drop of latest buildVianaNET - Videoanalysis for physical motion: VianaNET 1.2 - beta: This is the VianaNET beta release with some bug fixes. Would like to have some comments on it. Regards, AdrianWorkLogger: Worklogger Beta 1: Simple work logger for Windows in WPFMost Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryPHPExcelMicrosoft SQL Server Community & SamplesASP.NETMost Active Projectspatterns & practices – Enterprise LibraryRhyduino - Arduino and Managed CodeCassandraemonCommunity Forums NNTP bridgedotSpatialjQuery Library for SharePoint Web ServicesBlogEngine.NETLightweight Fluent WorkflowNB_Store - Free DotNetNuke Ecommerce Catalog ModuleUmbraco CMS

    Read the article

  • Metro: Namespaces and Modules

    - by Stephen.Walther
    The goal of this blog entry is to describe how you can use the Windows JavaScript (WinJS) library to create namespaces. In particular, you learn how to use the WinJS.Namespace.define() and WinJS.Namespace.defineWithParent() methods. You also learn how to hide private methods by using the module pattern. Why Do We Need Namespaces? Before we do anything else, we should start by answering the question: Why do we need namespaces? What function do they serve? Do they just add needless complexity to our Metro applications? After all, plenty of JavaScript libraries do just fine without introducing support for namespaces. For example, jQuery has no support for namespaces and jQuery is the most popular JavaScript library in the universe. If jQuery can do without namespaces, why do we need to worry about namespaces at all? Namespaces perform two functions in a programming language. First, namespaces prevent naming collisions. In other words, namespaces enable you to create more than one object with the same name without conflict. For example, imagine that two companies – company A and company B – both want to make a JavaScript shopping cart control and both companies want to name the control ShoppingCart. By creating a CompanyA namespace and CompanyB namespace, both companies can create a ShoppingCart control: a CompanyA.ShoppingCart and a CompanyB.ShoppingCart control. The second function of a namespace is organization. Namespaces are used to group related functionality even when the functionality is defined in different physical files. For example, I know that all of the methods in the WinJS library related to working with classes can be found in the WinJS.Class namespace. Namespaces make it easier to understand the functionality available in a library. If you are building a simple JavaScript application then you won’t have much reason to care about namespaces. If you need to use multiple libraries written by different people then namespaces become very important. Using WinJS.Namespace.define() In the WinJS library, the most basic method of creating a namespace is to use the WinJS.Namespace.define() method. This method enables you to declare a namespace (of arbitrary depth). The WinJS.Namespace.define() method has the following parameters: · name – A string representing the name of the new namespace. You can add nested namespace by using dot notation · members – An optional collection of objects to add to the new namespace For example, the following code sample declares two new namespaces named CompanyA and CompanyB.Controls. Both namespaces contain a ShoppingCart object which has a checkout() method: // Create CompanyA namespace with ShoppingCart WinJS.Namespace.define("CompanyA"); CompanyA.ShoppingCart = { checkout: function (){ return "Checking out from A"; } }; // Create CompanyB.Controls namespace with ShoppingCart WinJS.Namespace.define( "CompanyB.Controls", { ShoppingCart: { checkout: function(){ return "Checking out from B"; } } } ); // Call CompanyA ShoppingCart checkout method console.log(CompanyA.ShoppingCart.checkout()); // Writes "Checking out from A" // Call CompanyB.Controls checkout method console.log(CompanyB.Controls.ShoppingCart.checkout()); // Writes "Checking out from B" In the code above, the CompanyA namespace is created by calling WinJS.Namespace.define(“CompanyA”). Next, the ShoppingCart is added to this namespace. The namespace is defined and an object is added to the namespace in separate lines of code. 
A different approach is taken in the case of the CompanyB.Controls namespace. The namespace is created and the ShoppingCart object is added to the namespace with the following single line of code: WinJS.Namespace.define( "CompanyB.Controls", { ShoppingCart: { checkout: function(){ return "Checking out from B"; } } } ); Notice that CompanyB.Controls is a nested namespace. The top level namespace CompanyB contains the namespace Controls. You can declare a nested namespace using dot notation and the WinJS library handles the details of creating one namespace within the other. After the namespaces have been defined, you can use either of the two shopping cart controls. You call CompanyA.ShoppingCart.checkout() or you can call CompanyB.Controls.ShoppingCart.checkout(). Using WinJS.Namespace.defineWithParent() The WinJS.Namespace.defineWithParent() method is similar to the WinJS.Namespace.define() method. Both methods enable you to define a new namespace. The difference is that the defineWithParent() method enables you to add a new namespace to an existing namespace. The WinJS.Namespace.defineWithParent() method has the following parameters: · parentNamespace – An object which represents a parent namespace · name – A string representing the new namespace to add to the parent namespace · members – An optional collection of objects to add to the new namespace The following code sample demonstrates how you can create a root namespace named CompanyA and add a Controls child namespace to the CompanyA parent namespace: WinJS.Namespace.define("CompanyA"); WinJS.Namespace.defineWithParent(CompanyA, "Controls", { ShoppingCart: { checkout: function () { return "Checking out"; } } } ); console.log(CompanyA.Controls.ShoppingCart.checkout()); // Writes "Checking out" One significant advantage of using the defineWithParent() method over the define() method is the defineWithParent() method is strongly-typed. In other words, you use an object to represent the base namespace instead of a string. If you misspell the name of the object (CompnyA) then you get a runtime error. Using the Module Pattern When you are building a JavaScript library, you want to be able to create both public and private methods. Some methods, the public methods, are intended to be used by consumers of your JavaScript library. The public methods act as your library’s public API. Other methods, the private methods, are not intended for public consumption. Instead, these methods are internal methods required to get the library to function. You don’t want people calling these internal methods because you might need to change them in the future. JavaScript does not support access modifiers. You can’t mark an object or method as public or private. Anyone gets to call any method and anyone gets to interact with any object. The only mechanism for encapsulating (hiding) methods and objects in JavaScript is to take advantage of functions. In JavaScript, a function determines variable scope. A JavaScript variable either has global scope – it is available everywhere – or it has function scope – it is available only within a function. If you want to hide an object or method then you need to place it within a function. 
For example, the following code contains a function named doSomething() which contains a nested function named doSomethingElse(): function doSomething() { console.log("doSomething"); function doSomethingElse() { console.log("doSomethingElse"); } } doSomething(); // Writes "doSomething" doSomethingElse(); // Throws ReferenceError You can call doSomethingElse() only within the doSomething() function. The doSomethingElse() function is encapsulated in the doSomething() function. The WinJS library takes advantage of function encapsulation to hide all of its internal methods. All of the WinJS methods are defined within self-executing anonymous functions. Everything is hidden by default. Public methods are exposed by explicitly adding the public methods to namespaces defined in the global scope. Imagine, for example, that I want a small library of utility methods. I want to create a method for calculating sales tax and a method for calculating the expected ship date of a product. The following library encapsulates the implementation of my library in a self-executing anonymous function: (function (global) { // Public method which calculates tax function calculateTax(price) { return calculateFederalTax(price) + calculateStateTax(price); } // Private method for calculating state tax function calculateStateTax(price) { return price * 0.08; } // Private method for calculating federal tax function calculateFederalTax(price) { return price * 0.02; } // Public method which returns the expected ship date function calculateShipDate(currentDate) { currentDate.setDate(currentDate.getDate() + 4); return currentDate; } // Export public methods WinJS.Namespace.define("CompanyA.Utilities", { calculateTax: calculateTax, calculateShipDate: calculateShipDate } ); })(this); // Show expected ship date var shipDate = CompanyA.Utilities.calculateShipDate(new Date()); console.log(shipDate); // Show price + tax var price = 12.33; var tax = CompanyA.Utilities.calculateTax(price); console.log(price + tax); In the code above, the self-executing anonymous function contains four functions: calculateTax(), calculateStateTax(), calculateFederalTax(), and calculateShipDate(). The following statement is used to expose only the calcuateTax() and the calculateShipDate() functions: // Export public methods WinJS.Namespace.define("CompanyA.Utilities", { calculateTax: calculateTax, calculateShipDate: calculateShipDate } ); Because the calculateTax() and calcuateShipDate() functions are added to the CompanyA.Utilities namespace, you can call these two methods outside of the self-executing function. These are the public methods of your library which form the public API. The calculateStateTax() and calculateFederalTax() methods, on the other hand, are forever hidden within the black hole of the self-executing function. These methods are encapsulated and can never be called outside of scope of the self-executing function. These are the internal methods of your library. Summary The goal of this blog entry was to describe why and how you use namespaces with the WinJS library. You learned how to define namespaces using both the WinJS.Namespace.define() and WinJS.Namespace.defineWithParent() methods. We also discussed how to hide private members and expose public members using the module pattern.
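
    One detail worth adding, since the entry notes that namespaces group related functionality even when it is defined in different physical files: WinJS.Namespace.define() is additive, so calling it a second time with the same name merges the new members into the existing namespace rather than replacing it (at least, that is how the library behaves in my understanding). A minimal sketch, assuming WinJS (base.js) is already loaded and reusing the member names from the article:

        // Could live in one file...
        WinJS.Namespace.define("CompanyA.Utilities", {
            calculateTax: function (price) {
                return price * 0.10; // hypothetical flat rate, just for the example
            }
        });

        // ...and this in another file; the namespace object is reused, not overwritten.
        WinJS.Namespace.define("CompanyA.Utilities", {
            calculateShipDate: function (currentDate) {
                currentDate.setDate(currentDate.getDate() + 4);
                return currentDate;
            }
        });

        // Both members end up on the same CompanyA.Utilities object.
        console.log(typeof CompanyA.Utilities.calculateTax);      // "function"
        console.log(typeof CompanyA.Utilities.calculateShipDate); // "function"

    If that merge behavior matters to your design, it is easy to verify in the JavaScript console before relying on it.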

    Read the article

< Previous Page | 603 604 605 606 607 608 609 610 611 612 613 614  | Next Page >