Search Results

Search found 34971 results on 1399 pages for 'st even'.


  • MySQL Tables Missing/Corrupt After Recreation

    - by Synetech inc.
    Hi, Yesterday I dumped my MySQL databases to an SQL file and renamed the ibdata1 file. I then recreated it and imported the SQL file and moved the new ibdata1 file to my MySQL data directory, deleting the old one. I’ve done it before without issue, however this time something is not right. When I examine the (personal, not MySQL config) databases, they are all there, but they are empty… sort of. The data directory still has the .ibd files with the correct content in them and I can view the table list in the databases, but not the tables themselves. (I have file-per-table enabled, and am using InnoDB as default for everything.) For example with the urls database and its urls table, I can successfully open mysql.exe or phpMyAdmin and use urls;. I can even show tables; to see the expected table, but then when I try to describe urls; or select * from urls;, it complains that the table does not exist (even though it just listed it). (The MySQL Administrator lists the databases, but does not even list the tables, it indicates that the dbs are completely empty.) The problem now is that I have already deleted the SQL file (and cannot recover it even after scouring my hard-drive). So I am trying to figure out a way to repair these databases/tables. I can’t use the table repair function since it complains that the table does not exist, and I can’t dump them because again, it complains that the tables don’t exist. Like I’ve said, the data itself is still present in the .ibd files and the table names are present. I just need a way to get MySQL to recognize that the tables exist in the databases (I can find the column names of the tables in question in the ibdata1 file using a hex-editor). Any idea how I can repair this type of corruption? I don’t mind rolling up my sleeves, digging in, and taking a bunch of steps to fix it. Thanks a lot.
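    Since the .ibd files still hold the data, the usual recovery path for file-per-table InnoDB tables whose dictionary entries were lost is to recreate each table definition, discard the new empty tablespace, copy the saved .ibd file back into place, and re-import it. The sketch below only illustrates that sequence; the table name, columns, paths, and credentials are hypothetical, it assumes the schema can be recreated (for example from the column names found in the hex editor), and on older MySQL versions the final IMPORT TABLESPACE step can still fail with a tablespace-ID mismatch. Work on a copy of the data directory, never the only copy.

    # Hypothetical sketch of the recreate / discard / import sequence for one
    # surviving .ibd file. Table name, columns, paths, and credentials are
    # made up; adapt to the real schema, and test on a copy of the data first.
    import shutil
    import mysql.connector

    conn = mysql.connector.connect(user="root", password="secret", database="urls")
    cur = conn.cursor()

    # 1. Recreate the table definition so MySQL registers the table again.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS urls (
            id  INT NOT NULL PRIMARY KEY,
            url VARCHAR(255) NOT NULL
        ) ENGINE=InnoDB
    """)

    # 2. Throw away the new, empty tablespace file that the CREATE produced.
    cur.execute("ALTER TABLE urls DISCARD TABLESPACE")

    # 3. Copy the saved .ibd file back into the database directory.
    shutil.copy("/backup/urls.ibd", "/var/lib/mysql/urls/urls.ibd")

    # 4. Ask InnoDB to adopt the copied tablespace (may fail on old versions).
    cur.execute("ALTER TABLE urls IMPORT TABLESPACE")

    conn.commit()
    cur.close()
    conn.close()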

    Read the article

  • 10 Essential Tools for building ASP.NET Websites

    - by Stephen Walther
    I recently put together a simple public website created with ASP.NET for my company at Superexpert.com. I was surprised by the number of free tools that I ended up using to put together the website. Therefore, I thought it would be interesting to create a list of essential tools for building ASP.NET websites. These tools work equally well with both ASP.NET Web Forms and ASP.NET MVC. Performance Tools After reading Steve Souders two (very excellent) books on front-end website performance High Performance Web Sites and Even Faster Web Sites, I have been super sensitive to front-end website performance. According to Souders’ Performance Golden Rule: “Optimize front-end performance first, that's where 80% or more of the end-user response time is spent” You can use the tools below to reduce the size of the images, JavaScript files, and CSS files used by an ASP.NET application. 1. Sprite and Image Optimization Framework CSS sprites were first described in an article written for A List Apart entitled CSS sprites: Image Slicing’s Kiss of Death. When you use sprites, you combine multiple images used by a website into a single image. Next, you use CSS trickery to display particular sub-images from the combined image in a webpage. The primary advantage of sprites is that they reduce the number of requests required to display a webpage. Requesting a single large image is faster than requesting multiple small images. In general, the more resources – images, JavaScript files, CSS files – that must be moved across the wire, the slower your website. However, most people avoid using sprites because they require a lot of work. You need to combine all of the images and write just the right CSS rules to display the sub-images. The Microsoft Sprite and Image Optimization Framework enables you to avoid all of this work. The framework combines the images for you automatically. Furthermore, the framework includes an ASP.NET Web Forms control and an ASP.NET MVC helper that makes it easy to display the sub-images. You can download the Sprite and Image Optimization Framework from CodePlex at http://aspnet.codeplex.com/releases/view/50869. The Sprite and Image Optimization Framework was written by Morgan McClean who worked in the office next to mine at Microsoft. Morgan was a scary smart Intern from Canada and we discussed the Framework while he was building it (I was really excited to learn that he was working on it). Morgan added some great advanced features to this framework. For example, the Sprite and Image Optimization Framework supports something called image inlining. When you use image inlining, the actual image is stored in the CSS file. Here’s an example of what image inlining looks like: .Home_StephenWalther_small-jpg { width:75px; height:100px; background: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEsAAABkCAIAAABB1lpeAAAAB GdBTUEAALGOfPtRkwAAACBjSFJNAACHDwAAjA8AAP1SAACBQAAAfXkAAOmLAAA85QAAGcxzPIV3AAAKL s+zNfREAAAAASUVORK5CYII=) no-repeat 0% 0%; } The actual image (in this case a picture of me that is displayed on the home page of the Superexpert.com website) is stored in the CSS file. If you visit the Superexpert.com website then very few separate images are downloaded. For example, all of the images with a red border in the screenshot below take advantage of CSS sprites: Unfortunately, there are some significant Gotchas that you need to be aware of when using the Sprite and Image Optimization Framework. There are workarounds for these Gotchas. 
I plan to write about these Gotchas and workarounds in a future blog entry. 2. Microsoft Ajax Minifier Whenever possible you should combine, minify, compress, and cache with a far future header all of your JavaScript and CSS files. The Microsoft Ajax Minifier makes it easy to minify JavaScript and CSS files. Don’t confuse minification and compression. You need to do both. According to Souders, you can reduce the size of a JavaScript file by an additional 20% (on average) by minifying a JavaScript file after you compress the file. When you minify a JavaScript or CSS file, you use various tricks to reduce the size of the file before you compress the file. For example, you can minify a JavaScript file by replacing long JavaScript variables names with short variables names and removing unnecessary white space and comments. You can minify a CSS file by doing such things as replacing long color names such as #ffffff with shorter equivalents such as #fff. The Microsoft Ajax Minifier was created by Microsoft employee Ron Logan. Internally, this tool was being used by several large Microsoft websites. We also used the tool heavily on the ASP.NET team. I convinced Ron to publish the tool on CodePlex so that everyone in the world could take advantage of it. You can download the tool from the ASP.NET Ajax website and read documentation for the tool here. I created the installer for the Microsoft Ajax Minifier. When creating the installer, I also created a Visual Studio build task to make it easy to minify all of your JavaScript and CSS files whenever you do a build within Visual Studio automatically. Read the Ajax Minifier Quick Start to learn how to configure the build task. 3. ySlow The ySlow tool is a free add-on for Firefox created by Yahoo that enables you to test the front-end of your website. For example, here are the current test results for the Superexpert.com website: The Superexpert.com website has an overall score of B (not perfect but not bad). The ySlow tool is not perfect. For example, the Superexpert.com website received a failing grade of F for not using a Content Delivery Network even though the website using the Microsoft Ajax Content Delivery Network for JavaScript files such as jQuery. Uptime After publishing a website live to the world, you want to ensure that the website does not encounter any issues and that it stays live. I use the following tools to monitor the Superexpert.com website now that it is live. 4. ELMAH ELMAH stands for Error Logging Modules and Handlers for ASP.NET. ELMAH enables you to record any errors that happen at your website so you can review them in the future. You can download ELMAH for free from the ELMAH project website. ELMAH works great with both ASP.NET Web Forms and ASP.NET MVC. You can configure ELMAH to store errors in a number of different stores including XML files, the Event Log, an Access database, a SQL database, an Oracle database, or in computer RAM. You also can configure ELMAH to email error messages to you when they happen. By default, you can access ELMAH by requesting the elmah.axd page from a website with ELMAH installed. Here’s what the elmah page looks like from the Superexpert.com website (this page is password-protected because secret information can be revealed in an error message): If you click on a particular error message, you can view the original Yellow Screen ASP.NET error message (even when the error message was never displayed to the actual user). 
I installed ELMAH by taking advantage of the new package manager for ASP.NET named NuGet (originally named NuPack). You can read the details about NuGet in the following blog entry by Scott Guthrie. You can download NuGet from CodePlex. 5. Pingdom I use Pingdom to verify that the Superexpert.com website is always up. You can sign up for Pingdom by visiting Pingdom.com. You can use Pingdom to monitor a single website for free. At the Pingdom website, you configure the frequency that your website gets pinged. I verify that the Superexpert.com website is up every 5 minutes. I have the Pingdom service verify that it can retrieve the string “Contact Us” from the website homepage. If your website goes down, you can configure Pingdom so that it sends an email, Twitter, SMS, or iPhone alert. I use the Pingdom iPhone app which looks like this: 6. Host Tracker If your website does go down then you need some way of determining whether it is a problem with your local network or if your website is down for everyone. I use a website named Host-Tracker.com to check how badly a website is down. Here’s what the Host-Tracker website displays for the Superexpert.com website when the website can be successfully pinged from everywhere in the world: Notice that Host-Tracker pinged the Superexpert.com website from 68 locations including Roubaix, France and Scranton, PA. Debugging I mean debugging in the broadest possible sense. I use the following tools when building a website to verify that I have not made a mistake. 7. HTML Spell Checker Why doesn’t Visual Studio have a built-in spell checker? Don’t know – I’ve always found this mysterious. Fortunately, however, a former member of the ASP.NET team wrote a free spell checker that you can use with your ASP.NET pages. I find a spell checker indispensable. It is easy to delude yourself that you are capable of perfect spelling. I’m always super embarrassed when I actually run the spell checking tool and discover all of my spelling mistakes. The fastest way to add the HTML Spell Checker extension to Visual Studio is to select the menu option Tools, Extension Manager within Visual Studio. Click on Online Gallery and search for HTML Spell Checker: 8. IIS SEO Toolkit If people cannot find your website through Google then you should not even bother to create it. Microsoft has a great extension for IIS named the IIS Search Engine Optimization Toolkit that you can use to identify issues with your website that would hurt its page rank. You also can use this tool to quickly create a sitemap for your website that you can submit to Google or Bing. You can even generate the sitemap for an ASP.NET MVC website. Here’s what the report overview for the Superexpert.com website looks like: Notice that the Superexpert.com website had plenty of violations. For example, there are 65 cases in which a page has a broken hyperlink. You can drill into these violations to identify the exact page and location where these violations occur. 9. LinqPad If your ASP.NET website accesses a database then you should be using LINQ to Entities with the Entity Framework. Using LINQ involves some magic. LINQ queries written in C# get converted into SQL queries for you. If you are not careful about how you write your LINQ queries, you could unintentionally build a really badly performing website. LinqPad is a free tool that enables you to experiment with your LINQ queries. It even works with Microsoft SQL CE 4 and Azure. You can use LinqPad to execute a LINQ to Entities query and see the results.
You also can use it to see the resulting SQL that gets executed against the database: 10. .NET Reflector I use .NET Reflector daily. The .NET Reflector tool enables you to take any assembly and disassemble the assembly into C# or VB.NET code. You can use .NET Reflector to see the “Source Code” of an assembly even when you do not have the actual source code. You can download a free version of .NET Reflector from the Redgate website. I use .NET Reflector primarily to help me understand what code is doing internally. For example, I used .NET Reflector with the Sprite and Image Optimization Framework to better understand how the MVC Image helper works. Here’s part of the disassembled code from the Image helper class: Summary In this blog entry, I’ve discussed several of the tools that I used to create the Superexpert.com website. These are tools that I use to improve the performance, improve the SEO, verify the uptime, or debug the Superexpert.com website. All of the tools discussed in this blog entry are free. Furthermore, all of these tools work with both ASP.NET Web Forms and ASP.NET MVC. Let me know if there are any tools that you use daily when building ASP.NET websites.
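    As a small aside on the image-inlining technique mentioned under the Sprite and Image Optimization Framework: the data URI in that CSS rule is simply the image file run through base64 encoding. A minimal sketch of generating such a rule yourself is below; the file name and CSS class name are made up for illustration, and the framework of course automates all of this for you.

    # Minimal sketch: turn an image file into a CSS rule with an inlined data URI,
    # the same idea the Sprite and Image Optimization Framework automates.
    # The file name and class name below are hypothetical.
    import base64

    def inline_image_rule(css_class, image_path, width, height, mime="image/png"):
        with open(image_path, "rb") as f:
            encoded = base64.b64encode(f.read()).decode("ascii")
        return (
            f".{css_class} {{\n"
            f"    width: {width}px;\n"
            f"    height: {height}px;\n"
            f"    background: url(data:{mime};base64,{encoded}) no-repeat 0% 0%;\n"
            f"}}\n"
        )

    print(inline_image_rule("Home_StephenWalther_small-jpg", "stephen.jpg", 75, 100, "image/jpeg"))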

    Read the article

  • How to Use An Antivirus Boot Disc or USB Drive to Ensure Your Computer is Clean

    - by Chris Hoffman
    If your computer is infected with malware, running an antivirus within Windows may not be enough to remove it. If your computer has a rootkit, the malware may be able to hide itself from your antivirus software. This is where bootable antivirus solutions come in. They can clean malware from outside the infected Windows system, so the malware won’t be running and interfering with the clean-up process. The Problem With Cleaning Up Malware From Within Windows Standard antivirus software runs within Windows. If your computer is infected with malware, the antivirus software will have to do battle with the malware. Antivirus software will try to stop the malware and remove it, while the malware will attempt to defend itself and shut down the antivirus. For really nasty malware, your antivirus software may not be able to fully remove it from within Windows. Rootkits, a type of malware that hides itself, can be even trickier. A rootkit could load at boot time before other Windows components and prevent Windows from seeing it, hide its processes from the task manager, and even trick antivirus applications into believing that the rootkit isn’t running. The problem here is that the malware and antivirus are both running on the computer at the same time. The antivirus is attempting to fight the malware on its home turf — the malware can put up a fight. Why You Should Use an Antivirus Boot Disc Antivirus boot discs deal with this by approaching the malware from outside Windows. You boot your computer from a CD or USB drive containing the antivirus and it loads a specialized operating system from the disc. Even if your Windows installation is completely infected with malware, the special operating system won’t have any malware running within it. This means the antivirus program can work on the Windows installation from outside it. The malware won’t be running while the antivirus tries to remove it, so the antivirus can methodically locate and remove the harmful software without it interfering. Any rootkits won’t be able to set up the tricks they use at Windows boot time to hide themselves from the rest of the operating system. The antivirus will be able to see the rootkits and remove them. These tools are often referred to as “rescue disks.” They’re meant to be used when you need to rescue a hopelessly infected system. Bootable Antivirus Options As with any type of antivirus software, you have quite a few options. Many antivirus companies offer bootable antivirus systems based on their antivirus software. These tools are generally free, even when they’re offered by companies that specialize in paid antivirus solutions. Here are a few good options: avast! Rescue Disk – We like avast! for offering a capable free antivirus with good detection rates in independent tests. avast! now offers the ability to create an antivirus boot disc or USB drive. Just navigate to the Tools -> Rescue Disk option in the avast! desktop application to create bootable media. BitDefender Rescue CD – BitDefender always seems to receive good scores in independent tests, and the BitDefender Rescue CD offers the same antivirus engine in the form of a bootable disc. Kaspersky Rescue Disk – Kaspersky also receives good scores in independent tests and offers its own antivirus boot disc. These are just a handful of options. If you prefer another antivirus for some reason — Comodo, Norton, Avira, ESET, or almost any other antivirus product — you’ll probably find that it offers its own system rescue disk.
How to Use an Antivirus Boot Disc Using an antivirus boot disc or USB drive is actually pretty simple. You’ll just need to find the antivirus boot disc you want to use and burn it to disc or install it on a USB drive. You can do this part on any computer, so you can create antivirus boot media on a clean computer and then take it to an infected computer. Insert the boot media into the infected computer and then reboot. The computer should boot from the removable media and load the secure antivirus environment. (If it doesn’t, you may need to change the boot order in your BIOS or UEFI firmware.) You can then follow the instructions on your screen to scan your Windows system for malware and remove it. No malware will be running in the background while you do this. Antivirus boot discs are useful because they allow you to detect and clean malware infections from outside an infected operating system. If the operating system is severely infected, it may not be possible to remove — or even detect — all the malware from within it. Image Credit: aussiegall on Flickr     

    Read the article

  • Nerdstock 2012: A photo review of Microsoft TechEd North America 2012

    - by The Un-T Guy
    Not only could I not fathom that I would ever be attending a tech event of the magnitude of TechEd, neither could any of my co-workers.  As the least technical person in the history of Information Technology ever, I felt as though I were walking into the belly of the beast, fearing I’d not be allowed out until I could write SSIS packages, program in Visual Basic, or at least arm wrestle a DBA.  Most of my fears were unrealized.   But I made it.  I was here.  I even got to wear the Mark of the Geek neck package with schedule, eyeglass cleaners, name badge (company name obfuscated so they don’t fire me), and a pen.  The name  badge was seemingly the key element, as every vendor in the place wanted to scan it to capture name, email address, and numbers to show their bosses back home.  It also let me eat the food and drink the coffee so that’s a fair trade.   A recurring theme throughout the presentations and vendor demos was “the Cloud” and BYOD (bring your own device).  The below was a common site throughout the week, as attendees from all over the world brought their own devices and were able to (seemingly) seamlessly connect to the Worldwide Innerwebs.  Apparently proof that Microsoft and the event organizers were practicing what they were preaching.   “Cavernous” is one way to describe the downstairs facility itself.  “Freaking cavernous” might be more accurate.  Work sessions were held in classrooms on the second and third floors but the real action was happening downstairs.  Microsoft bookstore, blogger hub (shoutout to Geekswithblogs.net), The Wall (sans Pink Floyd, sadly), couches, recharging stations…   …a game zone with pool and air hockey tables, pinball machines, foosball…   …vintage video games…           …and a even giant chess board.  Looked like this guy was opening with the Kaspersky parry.   The blend of technology and fantasy even went so far as to bring childhood favorites to life.  Assuming, of course, your childhood was pre-video games (like mine) and you were stuck with electric football and Rock ‘em Sock ‘em robots:   And, lest the “combatants” become unruly or – God forbid – afternoon snacks were late, Orange County’s finest was on the scene to keep the peace.  On a high-tech mode of transport, of course.   She wasn’t the only one to think this was a swell way to transition from one concourse to the next.  Given the level of support provided by the entire Orange County Convention Center staff, I knew they had to have some secret.   Here’s one entrance to the vendor zone/”Technical Learning Center.”  Couldn’t help but think of them as the remora attached to the Whale Shark that is Microsoft…   …or perhaps planets orbiting the sun. Microsoft is just that huge and it seemed like every vendor in the industry looks forward to partnering with the tech behemoth.   Aside from the free stuff from the vendors, probably the most popular place in the house was the dining area.  Amazing spreads every day, multiple times a day.  While no attendance numbers were available at press time, literally thousands of attendees were fed, and fed well, every day.  And lest you think my post from earlier in the week exaggerated about the backpacks…   …or that I’m exaggerating about the lunch crowds.  This represents only about between 25-30% of the lunch crowd – it was all my camera could capture at once.  No one went away hungry.   
The only thing missing was a a vat of Red Bull but apparently organizers went old school, with probably 100 urns of the original energy drink – coffee – all around the venue.   Of course, following lunch and afternoon sessions, some preferred the even older school method of re-energizing.  There were rumors that Microsoft was serving graham crackers and milk in this area.  But they were only rumors.   Cannot overstate the wonderful service provided by the Orange County Convention Center staff.  Coffee, soft drinks, juice, and water were available always.  Buffet meals were delicious with a wide range of healthy options available, in addition to hundreds (at least) special meal requests supported every day.  Ever tried to keep up with an estimated 9,000 hungry and thirsty IT-ers?  These folks did.  Kudos to all of the staff and many thanks!   And while I occasionally poke fun at the Whale Shark, if nothing else this experience convinced me of one thing:  Microsoft knows how to put on a professional event.  Hundreds of informative, professionally delivered sessions, covering a wide range of topics set at varying levels of expertise (some that even I was able to follow), social activities, vendor partnerships…they brought everything you could ask for to inform, educate, and inspire an entire IT industry.   So as I depart the belly of the beast, I can both take pride in the fact that I survived the week and marvel at the brilliance surrounding me.  The IT industry – or at least the segment associated with Microsoft – is in good, professional hands.  And what won’t fit in their hands can be toted in the Microsoft provided backpacks.  Win-win.   Until New Orleans…

    Read the article

  • Edit ePub eBooks with Your Favorite HTML Editor

    - by Matthew Guay
    ePub eBooks are increasingly popular today, but often they’ve been made by converting other file formats. Here’s how you can edit ePub books to remove irregularities and make them better for reading on your devices. ePub’s are actually a zip file containing images, XHTML files with your text, and more with the .epub extension. You can make them better by editing the XHTML files directly.  Code gurus can edit the code directly, but even if you’ve never edited HTML, you can still quickly make changes with a WYSIWYG editor. Extract the Files from your ePub eBook As mentioned before, ePub files are actually renamed zip files.  So first let’s get all of the files in your ePub eBook accessible.  Find an eBook you want to edit and then change the file extension to .zip. If you don’t see the file extensions, click Organize in the menu bar and select Folder and search options. Select the View tab, and then uncheck the box beside Hide extensions for known file types.  Click Ok, and then change the file type as above. Windows will warn you about changing the file type; click Yes to proceed. Now you can browse the files of the ePub file.  Notice that it contains mostly HTML or XHTML files and images.  Click Extract all files to save them all in a folder so you can easily edit them. Alternately, you can open the ePub file directly in your favorite file archival program such as 7-zip.  Browse to the location of your ePub file, double-click it, and it’ll automatically open even if you don’t change the file extension to zip.  Now you can extract the folder, or extract individual files as before.   Edit Your eBook in KompoZer The actual ebook contents are stored in HTML or XHTML files.  These may be stored on the top folder of you ePub file’s directory, or they may be stored in \OEBPS\text in the file. To change the contents of your eBook, you’ll want to edit these files.  Often there may be separate files for each chapter, so you may have to use trial and error to find the one you need to edit.  You could edit them by hand in Windows using Notepad if you don’t have an HTML editor installed. A better option would be to use an HTML editor.  Here we’ll use the free KompoZer program to edit the files just like we’d edit a document in Word. Download KompoZer (link below), and unzip the files.  Then open the new folder and launch kompozer.exe; you don’t even need to install it.  In fact, you could even store KompoZer on a flash drive so you could edit HTML files from any computer. In KompoZer, open the HTML or XHTML file from your eBook that you want to edit. Now you can edit the file just like you would edit a document in Word.  Remove extra and unneeded text, make titles stand out, correct misspellings … anything you want!  This is especially helpful if your ePub file was created by converting a PDF as these often have many small errors. Or, if you’d rather edit the code itself, select the Source tab and edit as you wish. When you’re done making the changes, make sure to save the file in the same location with the same file name. Recreate Your Edited ePub eBook Once you’ve made all the changes you wanted, it’s time to turn this folder of files back into ePub.  Make sure you change the name of the folder if it still has the same name as the original ePub or zip file so you don’t mix them up or have trouble with overwriting the old files. Zip the folder using Windows Explorer or your favorite archival utility.  
If you are using another archival program, make sure to compress it as a zip folder; other compression methods will render the ePub unreadable by your eReader app. Now change the file extension again, this time back to .epub. Now you can read your eBook with your changes in your favorite reader program or app on your mobile device. Conclusion Whether you need to remove an odd, misplaced character or need to do fine editing, using an HTML editor is a great way to make your ePub eBooks look just like you want.  Also, with an editor like KompoZer it’s not even difficult. Download KompoZer
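    If you do this often, the rename-to-zip dance can also be scripted. Below is a small sketch using Python's standard zipfile module that unpacks an ePub and repacks the edited folder; the file names are hypothetical. One detail worth keeping in mind for stricter readers: the mimetype entry is expected to be the first file in the archive and stored without compression, so the sketch adds it specially.

    # Sketch: unpack an ePub for editing, then repack the edited folder.
    # Paths are hypothetical. Strict ePub readers expect the "mimetype" entry
    # to come first and to be stored uncompressed, so it is added specially.
    import os
    import zipfile

    def unpack_epub(epub_path, out_dir):
        with zipfile.ZipFile(epub_path) as zf:
            zf.extractall(out_dir)

    def repack_epub(src_dir, epub_path):
        with zipfile.ZipFile(epub_path, "w") as zf:
            mimetype = os.path.join(src_dir, "mimetype")
            if os.path.exists(mimetype):
                zf.write(mimetype, "mimetype", compress_type=zipfile.ZIP_STORED)
            for root, _dirs, files in os.walk(src_dir):
                for name in files:
                    full = os.path.join(root, name)
                    arcname = os.path.relpath(full, src_dir)
                    if arcname == "mimetype":
                        continue  # already added, uncompressed and first
                    zf.write(full, arcname, compress_type=zipfile.ZIP_DEFLATED)

    unpack_epub("mybook.epub", "mybook_edited")
    # ... edit the XHTML files in mybook_edited with KompoZer ...
    repack_epub("mybook_edited", "mybook_fixed.epub")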

    Read the article

  • SQLAuthority News – Technology and Online Learning – Personal Technology Tip

    - by pinaldave
    This is the fourth post in my series about Personal Technology Tips and Tricks, and I knew exactly what I wanted to write about.  But at first I was conflicted.   Is online learning really a personal tip?  Is it really a trick that no one knows?  However, I have decided to stick with my original idea because online learning is everywhere.  It’s a trick that we can’t – and shouldn’t – overlook.  Here are ten of my ideas about how we should be taking advantage of online learning. 1) Get ahead in the work place.  We all know that a good way to become better at your job, and to become more competitive for promotions and raises.  Many people overlook online learning as a way to get job training, though, thinking it is a path for people still seeking their high school or college diplomas.  But take a look at what companies like Pluralsight offer, and you might be pleasantly surprised. 2) Flexibility.  Some of us remember the heady days of college with nostalgia, others remember it with loathing.  A lot of bad memories come from remembering the strict scheduling and deadlines of college.  But with online learning, the classes fit into your free time – you don’t have to schedule your life around classes.  Even better, there are usually no homework or test deadlines, only one final deadline where all work must be completed.  This allows students to work at their own pace – my next point. 3) Learn at your own pace.  One thing traditional classes suffer from is that they are highly structured.  If you work more quickly than the rest of the class, or especially if you work more slowly, traditional classes do not work for you.  Online courses let you move as quickly or as slowly as you find necessary. 4) Fill gaps in your knowledge.  I’m sure I am not the only one who has thought to myself “I would love to take a course on X, Y, or Z.”  The problem is that it can be very hard to find the perfect class that teaches exactly what you’re interested in, at a time and a price that’s right.  But online courses are far easier to tailor exactly to your tastes. 5) Fits into your schedule.  Even harder to find than a class you’re interested in is one that fits into your schedule.  If you hold down a job – even a part time job – you know it’s next to impossible to find class times that work for you.  Online classes can be taken anytime, anywhere.  On your lunch break, in your car, or in your pajamas at the end of the day. 6) Student centered.  Online learning has to stay competitive.  There are hundreds, even thousands of options for students, and every provider has to find a way to lure in students and provide them with a good education.  The best kind of online classes know that they need to provide great classes, flexible scheduling, and high quality to attract students – and the student benefit from this kind of attention. 7) You can save money.  The average cost for a college diploma in the US is over $20,000.  I don’t know about you, but that is not the kind of money I just have lying around for a rainy day.  Sometimes I think I’d love to go back to school, but not for that price tag.  Online courses are much, much more affordable.  And even better, you can pick and choose what courses you’d like to take, and avoid all the “electives” in college. 8) Get access to the best minds in the business.  One of the perks of being the best in your field is that you are one person who knows the most about something.  If students are lucky, you will choose to share that knowledge with them on a college campus.  
For the hundreds of other students who don’t live in your area and don’t attend your school, they are out of luck.  But luckily for them, more and more online courses is attracting the best minds in the business, and if you enroll online, you can take advantage of these minds, too. 9) Save your time.  Getting a four year degree is a great decision, and I encourage everyone to pursue their Bachelor’s – and beyond.  But if you have already tried to go to school, or already have a degree but are thinking of switching fields, four years of your life is a long time to go back and redo things.  Getting your online degree will save you time by allowing you to work at your own pace, set your own schedule, and take only the classes you’re interested in. 10) Variety of degrees and programs.  If you’re not sure what you’re interested in, or if you only need a few classes here and there to finish a program, online classes are perfect for you.  You can pick and choose what you’d like, and sample a wide variety without spending too much money. I hope I’ve outlined for everyone just a few ways that they could benefit from online learning.  If you’re still unconvinced, just check out a few of my other articles that expand more on these topics. Here are the blog posts relevent to developer trainings: Developer Training - Importance and Significance - Part 1 Developer Training – Employee Morals and Ethics – Part 2 Developer Training – Difficult Questions and Alternative Perspective - Part 3 Developer Training – Various Options for Developer Training – Part 4 Developer Training – A Conclusive Summary- Part 5 Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Developer Training

    Read the article

  • Joins in single-table queries

    - by Rob Farley
    Tables are only metadata. They don’t store data. I’ve written something about this before, but I want to take a viewpoint of this idea around the topic of joins, especially since it’s the topic for T-SQL Tuesday this month. Hosted this time by Sebastian Meine (@sqlity), who has a whole series on joins this month. Good for him – it’s a great topic. In that last post I discussed the fact that we write queries against tables, but that the engine turns it into a plan against indexes. My point wasn’t simply that a table is actually just a Clustered Index (or heap, which I consider just a special type of index), but that data access always happens against indexes – never tables – and we should be thinking about the indexes (specifically the non-clustered ones) when we write our queries. I described the scenario of looking up phone numbers, and how it never really occurs to us that there is a master list of phone numbers, because we think in terms of the useful non-clustered indexes that the phone companies provide us, but anyway – that’s not the point of this post. So a table is metadata. It stores information about the names of columns and their data types. Nullability, default values, constraints, triggers – these are all things that define the table, but the data isn’t stored in the table. The data that a table describes is stored in a heap or clustered index, but it goes further than this. All the useful data is going to live in non-clustered indexes. Remember this. It’s important. Stop thinking about tables, and start thinking about indexes. So let’s think about tables as indexes. This applies even in a world created by someone else, who doesn’t have the best indexes in mind for you. I’m sure you don’t need me to explain Covering Index bit – the fact that if you don’t have sufficient columns “included” in your index, your query plan will either have to do a Lookup, or else it’ll give up using your index and use one that does have everything it needs (even if that means scanning it). If you haven’t seen that before, drop me a line and I’ll run through it with you. Or go and read a post I did a long while ago about the maths involved in that decision. So – what I’m going to tell you is that a Lookup is a join. When I run SELECT CustomerID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 285; against the AdventureWorks2012 get the following plan: I’m sure you can see the join. Don’t look in the query, it’s not there. But you should be able to see the join in the plan. It’s an Inner Join, implemented by a Nested Loop. It’s pulling data in from the Index Seek, and joining that to the results of a Key Lookup. It clearly is – the QO wouldn’t call it that if it wasn’t really one. It behaves exactly like any other Nested Loop (Inner Join) operator, pulling rows from one side and putting a request in from the other. You wouldn’t have a problem accepting it as a join if the query were slightly different, such as SELECT sod.OrderQty FROM Sales.SalesOrderHeader AS soh JOIN Sales.SalesOrderDetail as sod on sod.SalesOrderID = soh.SalesOrderID WHERE soh.SalesPersonID = 285; Amazingly similar, of course. This one is an explicit join, the first example was just as much a join, even thought you didn’t actually ask for one. You need to consider this when you’re thinking about your queries. But it gets more interesting. Consider this query: SELECT SalesOrderID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 276 AND CustomerID = 29522; It doesn’t look like there’s a join here either, but look at the plan. 
That’s not some Lookup in action – that’s a proper Merge Join. The Query Optimizer has worked out that it can get the data it needs by looking in two separate indexes and then doing a Merge Join on the data that it gets. Both indexes used are ordered by the column that’s indexed (one on SalesPersonID, one on CustomerID), and then by the CIX key SalesOrderID. Just like when you seek in the phone book to Farley, the Farleys you have are ordered by FirstName, these seek operations return the data ordered by the next field. This order is SalesOrderID, even though you didn’t explicitly put that column in the index definition. The result is two datasets that are ordered by SalesOrderID, making them very mergeable. Another example is the simple query SELECT CustomerID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 276; This one prefers a Hash Match to a standard lookup even! This isn’t just ordinary index intersection, this is something else again! Just like before, we could imagine it better with two whole tables, but we shouldn’t try to distinguish between joining two tables and joining two indexes. The Query Optimizer can see (using basic maths) that it’s worth doing these particular operations using these two less-than-ideal indexes (because of course, the best indexese would be on both columns – a composite such as (SalesPersonID, CustomerID – and it would have the SalesOrderID column as part of it as the CIX key still). You need to think like this too. Not in terms of excusing single-column indexes like the ones in AdventureWorks2012, but in terms of having a picture about how you’d like your queries to run. If you start to think about what data you need, where it’s coming from, and how it’s going to be used, then you will almost certainly write better queries. …and yes, this would include when you’re dealing with regular joins across multiples, not just against joins within single table queries.

    Read the article

  • SQLAuthority News – 7th Anniversary of Blog – A Personal Note

    - by Pinal Dave
    Special Day Today is a very special day – seven years ago I blogged for the very first time.  Seven years ago, I didn’t know what I was doing, I didn’t know how to blog, or even what a blog was or what to write.  I was working as a DBA, and I was trying to solve a problem – at my job, there were a few issues I had to fix again and again and again.  There were days when I was rewriting the same solution over and over, and there were times when I would get very frustrated because I could not write the same elegant solution that I had written before.  I came up with a solution to this problem – posting these solutions online, where I could access them whenever I needed them.  At that point, I had no idea what a blog was, or even how the internet worked, I had no idea that a blog would be visible to others.  Can you believe it? Google it on Yahoo! After a few posts on this “blog,” there was a surprise for me – an e-mail saying that someone had left me a comment.  I was surprised, because I didn’t even know you could comment on a blog!  I logged on and read my comment.  It said: “I like your script,but there is a small bug.  If you could fix it, it will run on multiple other versions of SQL Server.”  I was like, “wow, someone figured out how to find my blog, and they figured out how to fix my script!”  I found the bug, I fixed the script, and a wrote a thank you note to the guy.  My first question for him was: how did you figure it out – not the script, but how to find my blog?  He said he found it from Yahoo Search (this was in the time before Google, believe it or not). From that day, my life changed.  I wrote a few more posts, I got a few more comments, and I started to watch my traffic.  People were reading, commenting, and giving feedback.  At the end of the day, people enjoyed what I was writing.  This was a fantastic feeling!  I never thought I would be writing for others.  Even today, I don’t feel like I am writing for others, but that I am simply posting what I am learning every day.  From that very first day, I decided that I would not change my intent or my blog’s purpose. 72 Million Views – 2600 Posts – 57000 comments – 10 books – 9 courses Today, this blog is my habit, my addiction, my baby.  Every day I try to learn something new, and that lesson gets posted on the blog.  Lately there have been days where I am traveling for a full 24 hours, but even on those days I try to learn something new, and later when I have free time, I will still post it to the blog.  Because of this habit, this blog has over 72 millions views, I have written more than 2600 posts, and there are 57,000 comments and counting.  I have also written 10 books, 9 courses, and learned so many things.  This blog has given me back so much more than I ever put it into it.  It gave me an education, a reason to learn something new every day, and a way to connect to people.  I like to think of it as a learning chain, a relay where we all pass knowledge from one to another. Never Ending Journey When I started the blog, I thought I would write for a few days and stop, but now after seven years I haven’t stopped and I have no intention of stopping!  However, change happens, and for this blog it will start today.  This blog started as a single resource for SQL Server, but now it has grown beyond, to Sharepoint, Personal Development, Developer Training, MySQL, Big Data, and lots of other things.  Truly speaking, this blog is more than just SQL Server, and that was always my intention.  
I named it “SQL Authority,” not “SQL Server Authority”!  Loudly and clearly, I would like to announce that I am going to go back to my roots and start writing more about SQL, more about big data, and more about the other technology like relational databases, MySQL, Oracle, and others.  My goal is not to become a comprehensive resource for every technology, my goal is to learn something new every day – and now it can be so much more than just SQL Server.  I will learn it, and post it here for you. I have written a very long post on this anniversary, but here is the summary: Thank You.  You all have been wonderful.  Seven years is a long journey, and it makes me emotional.  I have been “with” this blog before I met my wife, before we had our daughter.  This blog is like a fourth member of the family.  Keep reading, keep commenting, keep supporting.  Thank you all. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: About Me, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL

    Read the article

  • Best Practices - Dynamic Reconfiguration

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains). Overview of Dynamic Reconfiguration Oracle VM Server for SPARC supports Dynamic Reconfiguration (DR), making it possible to add or remove resources to or from a domain (virtual machine) while it is running. This is extremely useful because resources can be shifted to or from virtual machines in response to load conditions without having to reboot or interrupt running applications. For example, if an application requires more CPU capacity, you can add CPUs to improve performance, and remove them when they are no longer needed. You can even use Dynamic Resource Management (DRM) policies that automatically add and remove CPUs to domains based on load. How it works (in broad general terms) Dynamic Reconfiguration is done in coordination with Solaris, which recognises a hypervisor request to change its virtual machine configuration and responds appropriately. In essence, Solaris receives a message saying "you now have 16 more CPUs numbered 16 to 31" or "8GB more RAM starting at address X" or "here's a new network or disk device - have fun with it". These actions take very little time. Solaris then can start using the new resource. In the case of added CPUs, that means dispatching processes and potentially binding interrupts to the new CPUs. For memory, Solaris adds the new memory pages to its "free" list and starts using them. Comparable actions occur with network and disk devices: they are recognised by Solaris and then used. Removing is the reverse process: after receiving the DR message to free specific CPUs, Solaris unbinds interrupts assigned to the CPUs and stops dispatching process threads. That takes very little time.

    primary # ldm list
    NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
    primary  active  -n-cv-  SP    16    4G      1.0%  6d 22h 29m
    ldom1    active  -n----  5000  16    8G      0.9%  6h 59m
    primary # ldm set-core 5 ldom1
    primary # ldm list
    NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
    primary  active  -n-cv-  SP    16    4G      0.2%  6d 22h 29m
    ldom1    active  -n----  5000  40    8G      0.1%  6h 59m
    primary # ldm set-core 2 ldom1
    primary # ldm list
    NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
    primary  active  -n-cv-  SP    16    4G      1.0%  6d 22h 29m
    ldom1    active  -n----  5000  16    8G      0.9%  6h 59m

    Memory pages are vacated by copying their contents to other memory locations and wiping them clean. Solaris may have to swap memory contents to disk if the remaining RAM isn't enough to hold all the contents. For this reason, deallocating memory can take longer on a loaded system. Even on a lightly loaded system it took 7 or 8 seconds to switch the domain below between 8GB and 24GB of RAM.

    primary # ldm set-mem 24g ldom1
    primary # ldm list
    NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
    primary  active  -n-cv-  SP    16    4G      0.1%  6d 22h 36m
    ldom1    active  -n----  5000  16    24G     0.2%  7h 6m
    primary # ldm set-mem 8g ldom1
    primary # ldm list
    NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
    primary  active  -n-cv-  SP    16    4G      0.7%  6d 22h 37m
    ldom1    active  -n----  5000  16    8G      0.3%  7h 7m

    What if the device is in use? (this is the anecdote that inspired this blog post) If CPU or memory is being removed, releasing it is pretty straightforward, using the method described above. The resources are released, and Solaris continues with less capacity. It's not as simple with a network or I/O device: you don't want to yank a device out from underneath an application that might be using it.
In the following example, I've added a virtual network device to ldom1 and want to take it away, even though it's been plumbed.

    primary # ldm rm-vnet vnet19 ldom1
    Guest LDom returned the following reason for failing the operation:
      Resource                                                      Information
      ----------------------------------------------------------    -----------------------
      /devices/virtual-devices@100/channel-devices@200/network@1    Network interface net1
    VIO operation failed because device is being used in LDom ldom1
    Failed to remove VNET instance

That's what I call a helpful error message - telling me exactly what was wrong. In this case the problem is easily solved. I know this NIC is seen in the guest as net1 so:

    ldom1 # ifconfig net1 down unplumb

Now I can dispose of it, and even the virtual switch I had created for it:

    primary # ldm rm-vnet vnet19 ldom1
    primary # ldm rm-vsw primary-vsw9

If I had to take away the device disruptively, I could have used ldm rm-vnet -f but that could disrupt whoever was using it. It's better if that can be avoided. Summary Oracle VM Server for SPARC provides dynamic reconfiguration, which lets you modify a guest domain's CPU, memory and I/O configuration on the fly without a reboot. You can add and remove resources as needed, and even automate this for CPUs by setting up resource policies. Taking things away can be more complicated than giving, especially for devices like disks and networks that may contain application and system state or be involved in a transaction. LDoms and Solaris cooperatively work together to coordinate resource allocation and de-allocation in a safe and effective way. For best practices, use dynamic reconfiguration to make the best use of your system's resources.
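    As a toy illustration of the DRM idea mentioned above (and only that: this is not the LDoms manager's built-in policy engine, which is what you would use in practice), the sketch below parses the UTIL column from ldm list output and nudges a domain's core count up or down with ldm set-core. The domain name and thresholds are made up, and it assumes it runs from the control domain.

    # Toy illustration of the DRM idea: watch a guest domain's utilization as
    # reported by "ldm list" and adjust its core count with "ldm set-core".
    # Domain name and thresholds are hypothetical; use the built-in DRM
    # policies for real deployments. Run from the control domain.
    import subprocess
    import time

    DOMAIN = "ldom1"
    MIN_CORES, MAX_CORES = 2, 8
    cores = 2

    def domain_util(domain):
        out = subprocess.check_output(["ldm", "list"], text=True)
        for line in out.splitlines()[1:]:            # skip the header row
            fields = line.split()
            if fields and fields[0] == domain:
                return float(fields[6].rstrip("%"))  # UTIL column, e.g. "0.9%"
        raise RuntimeError("domain not found: " + domain)

    while True:
        util = domain_util(DOMAIN)
        if util > 80.0 and cores < MAX_CORES:
            cores += 1
        elif util < 20.0 and cores > MIN_CORES:
            cores -= 1
        else:
            time.sleep(60)
            continue
        subprocess.check_call(["ldm", "set-core", str(cores), DOMAIN])
        time.sleep(60)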

    Read the article

  • Is IE9 really good?

    - by anirudha
    Microsoft started a campaign to kill off IE6 because they know IE6 is a big obstacle to promoting version 9 of IE. Next time they will kill IE7, 8, and 9 in the same way, whenever an old version becomes a problem for promoting the next one. Why not build an update system that automatically updates the browser and simply asks the user to restart once the update is installed? IE should learn from the others, which have well-designed auto-update systems that never leave the user stuck on an outdated browser. Chrome and Firefox both update themselves and just ask the user to restart to enjoy the new version. A big problem with IE6 is updating itself. Nobody is sure they can install a new version without hassles, because of all the "you need this to install this, and this for that" prerequisites, so they think "why update IE when I'm not sure the update will go smoothly and I have no problem today?" They do nothing, because their work gets done and the mainstream applications they use still work even in IE6.

    The IE6 countdown website provides a banner to warn, or force, users to upgrade to the next version of IE. There is no good reason to put that banner on a website. Windows 7 comes with IE8 pre-installed and Vista ships with a newer version than IE6, so the banner really targets users on Windows XP [Luna], and if they upgrade IE there, they can only get IE8, not IE9, because IE9 is designed for Windows 7 or Vista Service Pack 2. What is the use of pushing an upgrade when the user still ends up on an outdated version? IE8 is old and has no HTML5 capability, so forcing users through the banner makes little sense. I don't know why the sites listed there put the banner on their own pages. It would be better to offer users what they actually want instead of another outdated version of IE: a list of browsers they can try to improve their browsing experience, not only IE.

    IE9 is built on WPF, and it feels like more time was spent on using WPF in IE than on making the browser a good user experience. Many things are designed wrongly in IE; the first is tabs. The tabs in Chrome are bigger and easy to move, and the same is true in Firefox, even if its tabbing is not as smooth. IE has the same tabbing as Chrome but the tabs are too small, and if you try to move one it sometimes ends up in a different instance of IE. Chrome has big buttons, tabs, and menus to improve the browsing experience, and Firefox lets you make them bigger or smaller and put add-on icons on the toolbar for easy access. IE offers no such customization, so we cannot even think about that.

    Chrome provides lots of extensions and a web store for browser applications, and the same can be seen in Firefox, but there are practically no plugins for IE. Look at the IE add-ons website: no plugins are listed for web development, not even as a category or tag. As a response from many blogs, there is news for developers about the new version of the IE9 developer tools. A blogger wrote that IE9 has three new tabs; when I tried them I found many things, but I was still unable to edit the CSS from the HTML tab, and I found no plugin to enhance web development in IE9.
Other browsers give me things IE9 never does, such as personas, customization, and browser extensions, while IE treats customization as a small thing. IE9 also still has some problems with JavaScript: when I use Firefox and Chrome and log out in both, my cookie is deleted, but in IE it is not. It shows me that IE9 still differs from the others, and not always in good ways. When I tried to read an article written in Hindi using a Unicode font, I found many characters rendered incorrectly. There are three "sha" characters in Hindi and they all come out wrong in IE. The misprints are not mistakes in the article's writing; it is a problem with the browser rendering the font. Firefox and Chrome do not give me this problem, and even Opera, which renders the font in an italic style at a smaller size, still gets it right. At Pwn2Own, Apple's Safari and IE9 were both hacked. That is interesting news for those who think open source loses on security and closed source is highly secure software. It is not a good parameter for judging software anyway; it should depend on how much an application is tested and used, because more testing and more use make an application better. I appreciate that Microsoft made their new version 9 and wish them good luck; it is another matter that I personally found nothing in it for me.

    Read the article

  • Custom sectionGroup and Section App.config

    - by fampinheiro
    <configSections>
      <section name="castle" type="Castle.Windsor.Configuration.AppDomain.CastleSectionhandler, Castle.Windsor" />
      <sectionGroup name="codegarten">
        <section name="configuration" type="Tmp.StartupCodegartenConfigSection, Tmp" />
        <section name="apache" type="Tmp.StartupApacheConfigSection, Tmp" />
      </sectionGroup>
    </configSections>

    When I use the MSDN sample Main method to enumerate all the sections, I get this error:

    Unhandled Exception: System.Configuration.ConfigurationErrorsException: An error occurred creating the configuration section handler for codegarten/apache: Could not load type 'Tmp.StartupApacheConfigSection' from assembly 'Tmp'. (D:\Codegarten\trunk\Codegarten\Tmp\bin\Debug\Tmp.exe.Config line 8) ---> System.TypeLoadException: Could not load type 'Tmp.StartupApacheConfigSection' from assembly 'Tmp'.
       at System.Configuration.TypeUtil.GetTypeWithReflectionPermission(IInternalConfigHost host, String typeString, Boolean throwOnError)
       at System.Configuration.MgmtConfigurationRecord.CreateSectionFactory(FactoryRecord factoryRecord)
       at System.Configuration.BaseConfigurationRecord.FindAndEnsureFactoryRecord(String configKey, Boolean& isRootDeclaredHere)
       --- End of inner exception stack trace ---
       at System.Configuration.BaseConfigurationRecord.FindAndEnsureFactoryRecord(String configKey, Boolean& isRootDeclaredHere)
       at System.Configuration.BaseConfigurationRecord.GetSectionRecursive(String configKey, Boolean getLkg, Boolean checkPermission, Boolean getRuntimeObject, Boolean requestIsHere, Object& result, Object& resultRuntimeObject)
       at System.Configuration.ConfigurationSectionCollection.Get(String name)
       at System.Configuration.ConfigurationSectionCollection.<GetEnumerator>d__0.MoveNext()
       at Tmp.Program.ShowSectionGroupInfo(ConfigurationSectionGroup sectionGroup) in D:\Codegarten\trunk\Codegarten\Tmp\Program.cs:line 53
       at Tmp.Program.ShowSectionGroupCollectionInfo(ConfigurationSectionGroupCollection sectionGroups) in D:\Codegarten\trunk\Codegarten\Tmp\Program.cs:line 30
       at Tmp.Program.Main(String[] args) in D:\Codegarten\trunk\Codegarten\Tmp\Program.cs:line 22

    Thanks

    Read the article

  • Spikes in Socket Performance

    - by Harun Prasad
    We are facing random spikes in a high-throughput transaction processing system that uses sockets for IPC. Below is the setup used for the run. The client opens and closes a new connection for every transaction, and there are 4 exchanges between the server and the client. We have disabled TIME_WAIT by setting the socket linger (SO_LINGER) option via setsockopt, as we thought the spikes were caused by sockets waiting in TIME_WAIT. There is no processing done for the transaction; only messages are passed. OS used: CentOS 5.4. The average round trip time is around 3 milliseconds, but sometimes the round trip time ranges from 100 milliseconds to a couple of seconds.

    Steps used for execution and measurement, and the output:

    Starting the server:
        $ python sockServerLinger.py /dev/null &

    Starting the client to post 1 million transactions to the server; it logs the time for each transaction in the client.log file:
        $ python sockClient.py 1000000 client.log

    Once the execution finishes, the following command shows the execution times greater than 100 milliseconds in the format <line_number>:<execution_time>:
        $ grep -n "0.[1-9]" client.log | less

    Below is the example code for the server and client.

    Server:

        # File: sockServerLinger.py
        import socket, traceback, time
        import struct

        host = ''
        port = 9999
        l_onoff = 1
        l_linger = 0
        lingeropt = struct.pack('ii', l_onoff, l_linger)

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, lingeropt)
        s.bind((host, port))
        s.listen(1)
        while 1:
            try:
                clientsock, clientaddr = s.accept()
                print "Got connection from", clientsock.getpeername()
                data = clientsock.recv(1024*1024*10)
                #print "asdasd", data
                numsent = clientsock.send(data)
                data1 = clientsock.recv(1024*1024*10)
                numsent = clientsock.send(data)
                ret = 1
                while(ret > 0):
                    data1 = clientsock.recv(1024*1024*10)
                    ret = len(data)
                clientsock.close()
            except KeyboardInterrupt:
                raise
            except:
                print traceback.print_exc()
                continue

    Client:

        # File: sockClient.py
        import socket, traceback, sys
        import time

        i = 0
        while 1:
            try:
                st = time.time()
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                while (s.connect_ex(('127.0.0.1', 9999)) != 0):
                    continue
                numsent = s.send("asd"*1000)
                response = s.recv(6000)
                numsent = s.send("asd"*1000)
                response = s.recv(6000)
                i += 1
                if i == int(sys.argv[1]):
                    break
            except KeyboardInterrupt:
                raise
            except:
                print "in exec:::::::::::::", traceback.print_exc()
                continue
            print time.time() - st
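    One factor worth ruling out here is Nagle's algorithm interacting with delayed ACKs: with several small request/response exchanges per fresh TCP connection, that combination can occasionally add delays of tens to hundreds of milliseconds. This is only an assumption about the cause of the spikes, not something established by the measurements above, but it is cheap to test. A minimal sketch in Python:

        # Hypothetical diagnostic: disable Nagle's algorithm (TCP_NODELAY) on the client
        # socket before the exchanges, then re-run the measurement. This is an assumption
        # to test, not a confirmed cause of the spikes.
        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # send small writes immediately
        # ... connect_ex / send / recv exactly as in sockClient.py above ...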

    Read the article

  • Address Match Key Algorithm

    - by sestocker
    I have a list of addresses in two separate tables that are slightly off and that I need to be able to match. For example, the same address can be entered in multiple ways:

        110 Test St
        110 Test St.
        110 Test Street

    Although this example is simple, you can imagine more complex scenarios. I am trying to develop a simple algorithm that will be able to match the above addresses with a single key. For example, the key might be "11TEST" - the first two characters of 110, the first two of Test and the first two of the street variant. A full match key would also include the first 5 digits of the zip code, so in the above example the full key might look like "11TEST44680". I am looking for ideas for an effective algorithm, or resources I can look at for considerations when developing this. Any ideas can be pseudo code or in your language of choice. We are only concerned with US addresses; in fact, we are only looking at addresses from 250 zip codes in Ohio and Michigan. We also do not have access to any postal software, although we would be open to ideas for cost-effective solutions (it would essentially be a one-time use). Please be mindful that this is an initial dump of data from a government source, so suggestions on how users can clean it are helpful as I build out the application, but I would love to have the best initial match I possibly can by matching addresses as well as possible.
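    Since the question invites pseudo code in any language, here is a minimal sketch of one possible key-building approach in Python. The suffix table, the number of characters kept from each part, and the function name are all assumptions to be tuned against the real data, not a definitive rule set:

        # Hypothetical match-key builder: first 2 digits of the house number, the first
        # few characters of the normalized street name, plus the 5-digit ZIP. Every
        # choice here (suffix map, lengths) is an assumption to tune, not a standard.
        import re

        SUFFIXES = {"STREET": "ST", "AVENUE": "AVE", "ROAD": "RD"}  # assumed examples

        def match_key(address, zipcode):
            words = re.sub(r"[^A-Z0-9 ]", "", address.upper()).split()
            words = [SUFFIXES.get(w, w) for w in words]
            number = words[0][:2] if words and words[0].isdigit() else ""
            street = words[1][:4] if len(words) > 1 else ""
            return number + street + str(zipcode)[:5]

        print(match_key("110 Test Street", "44680"))  # -> 11TEST44680
        print(match_key("110 Test St.", "44680"))     # -> 11TEST44680, so the rows match

    Keys like this are deliberately lossy, so it may be worth keeping the generated key alongside the original columns and reviewing collisions by hand.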

    Read the article

  • Reading from a file, atoi() returns zero only on first element

    - by Nazgulled
    Hi, I don't understand why atoi() is working for every entry but the first one. I have the following code to parse a simple .csv file:

        void ioReadSampleDataUsers(SocialNetwork *social, char *file) {
            FILE *fp = fopen(file, "r");
            if(!fp) {
                perror("fopen");
                exit(EXIT_FAILURE);
            }

            char line[BUFSIZ], *word, *buffer, name[30], address[35];
            int ssn = 0, arg;

            while(fgets(line, BUFSIZ, fp)) {
                line[strlen(line) - 2] = '\0';
                buffer = line;
                arg = 1;

                do {
                    word = strsep(&buffer, ";");

                    if(word) {
                        switch(arg) {
                            case 1:
                                printf("[%s] - (%d)\n", word, atoi(word));
                                ssn = atoi(word);
                                break;
                            case 2:
                                strcpy(name, word);
                                break;
                            case 3:
                                strcpy(address, word);
                                break;
                        }

                        arg++;
                    }
                } while(word);

                userInsert(social, name, address, ssn);
            }

            fclose(fp);
        }

    And the .csv sample file is this:

        900011000;Jon Yang;3761 N. 14th St
        900011001;Eugene Huang;2243 W St.
        900011002;Ruben Torres;5844 Linden Land
        900011003;Christy Zhu;1825 Village Pl.
        900011004;Elizabeth Johnson;7553 Harness Circle

    But this is the output:

        [900011000] - (0)
        [900011001] - (900011001)
        [900011002] - (900011002)
        [900011003] - (900011003)
        [900011004] - (900011004)

    What am I doing wrong?
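    One common cause of exactly this symptom - only the first field failing while printf still shows the digits - is an invisible UTF-8 byte-order mark at the start of the file: the first field then begins with bytes atoi() cannot parse, so it returns 0. That is a hedged guess, not something established from the post, but it is quick to check; a sketch in Python (the file name is a placeholder):

        # Hypothetical check: does the data file start with a UTF-8 BOM (EF BB BF)?
        with open("users.csv", "rb") as f:        # "users.csv" is a placeholder name
            head = f.read(3)
        print(head.startswith(b"\xef\xbb\xbf"))   # True would explain atoi() == 0 on line 1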

    Read the article

  • T-SQL Table Joins - Unique Situation

    - by Dimitri
    Hello everyone. This is my first time encountering a case like this and I don't quite know how to handle it.

    Situation: I have one table, tblSettingsDefinition, with fields ID, GroupID, Name, typeID, DefaultValue. Then I have tblSettingtypes with fields TypeID, Name. And I have a final table, tblUserSettings, with fields SettingID, SettingDefinitionID, UserID, Value.

    The whole point of this is to have customizable settings. A setting can be defined for a group or as a global setting (if GroupID is NULL). It has a default value, but if a user modifies the setting, an entry is added to tblUserSettings that stores the new value.

    I want a query that grabs user settings by first looking at tblUserSettings and, if it has records for the given user, grabbing them, and if not, retrieving the default settings. But the trick is that whether or not the user has settings, I need the fields from the other two tables retrieved to know the setting's type, name and so on (which are stored in those other tables). I'm writing a query something like this:

        SELECT *
        FROM tblSettingDefinition SD
        LEFT JOIN tblUserSettings US ON SD.SettingID = US.SettingDefinitionID
        JOIN tblSettingTypes ST ON SD.TypeID = ST.ID
        WHERE US.UserID = @UserID
           OR ((SD.GroupID IS NULL) OR (SD.GroupID = (SELECT GroupID FROM tblUser WHERE ID = @UserID)))

    but it retrieves settings for all users from tblUserSettings instead of just the ones that match the current @UserID. And if @UserID has no records in tblUserSettings, all user settings are still retrieved instead of the defaults from tblSettingDefinition. Hope I made myself clear. Any help would be highly appreciated. Thank you.

    Read the article

  • Is there any other efficient way to use table variable instead of using temporary table

    - by varta shrimali
    We are writing a script to display banners on a web page, and we are using a temporary table in a MySQL procedure. Is there a more efficient way to do this, for example with a table variable, instead of using a temporary table? We are using the following code:

        -- banner location CURSOR --
        DECLARE banner_location_cursor CURSOR FOR
            select bm.id as masterId, bm.section as masterName, bs.id as locationId, bs.sectionName as locationName
            from banner_master as bm
            inner join banner_section as bs on bm.id = bs.masterId
            where bm.section = sCode;

        -- DECLARE banner CURSORS
        DECLARE banner_cursor CURSOR FOR
            SELECT bd.id as bannerId, bd.sectionId, bd.bannerName, bd.websiteURL, bd.paymentType, bd.status,
                   bd.startDate, bd.endDate, bd.bannerDisplayed, bs.id, bs.sectionName
            from banner_detail as bd
            inner join banner_section as bs on bs.id = bd.sectionId
            where bs.id = location_id and bd.status = 'A'
              and (dates between cast(bd.startDate as DATE) and cast(bd.endDate as DATE))
            order by rand(), bd.bannerDisplayed asc
            limit 1;

        DECLARE CONTINUE HANDLER FOR NOT FOUND SET no_more_rows = 1;

        SET dates = (select curdate());

        -- RESULTS TABLE WHICH WILL BE RETURNED --
        CREATE temporary TABLE test (
            b_id INT, s_id INT, b_name varchar(128), w_url varchar(128), p_type varchar(128),
            st char(1), s_date datetime, e_date datetime, b_display int, sec_id int, s_name varchar(128)
        );

        -- OPEN banner location CURSOR
        OPEN banner_location_cursor;
        the_loop: LOOP
            FETCH banner_location_cursor INTO master_id, master_name, location_id, location_name;
            IF no_more_rows THEN
                CLOSE banner_location_cursor;
                leave the_loop;
            END IF;

            OPEN banner_cursor;
            -- select FOUND_ROWS();
            the_loop2: LOOP
                FETCH banner_cursor INTO banner_id, section_id, banner_name, website_url, payment, status,
                                         start_date, end_date, banner_displayed, sec_id, section_name;
                IF no_more_rows THEN
                    set no_more_rows = 0;
                    CLOSE banner_cursor;
                    leave the_loop2;
                END IF;

                INSERT INTO test ( b_id, s_id, b_name, w_url, p_type, st, s_date, e_date, b_display, sec_id, s_name )
                VALUES ( banner_id, section_id, banner_name, website_url, payment, status, start_date, end_date,
                         banner_displayed, sec_id, section_name );

                UPDATE banner_detail set bannerDisplayed = (banner_displayed + 1) where id = banner_id;
            END LOOP the_loop2;
        END LOOP the_loop;

        -- RETURN result
        SELECT * FROM test;

        -- DROP RESULTS TABLE
        DROP TABLE test;
        END

    Read the article

  • SQL query in JSP file pulling variable from VXML file

    - by s1066
    Hi, I'm trying to get an SQL query to work within a JSP file. The JSP file is pulled by a VXML file. Here is my JSP file code:

        <?xml version="1.0"?>
        <%@ page import="java.util.*" %>
        <%@ page import="java.sql.*" %>
        <%
        boolean success = true; // Always optimistic
        String info = "";
        String schoolname = request.getParameter("schoolname");
        String informationtype = request.getParameter("informationtype");
        try {
            Class.forName("org.postgresql.Driver");
            String connectString = "jdbc:postgresql://localhost:5435/N0176359";
            String user = "****";
            String password = "*****";
            Connection conn = DriverManager.getConnection(connectString, user, password);
            Statement st = conn.createStatement();
            ResultSet rsvp = st.executeQuery("SELECT * FROM lincolnshire_school_information_new WHERE school_name=\'"+schoolname+"\'");
            rsvp.next();
            info = rsvp.getString(2);
        } catch (ClassNotFoundException e) {
            success = false; // something went wrong
        }
        %>

    As you can see, I'm trying to insert the value of the variable declared as "schoolname" at the end of the SQL query. However, when I come to run the JSP file it doesn't work and I get a "ResultSet not positioned properly" error. When I put in a standard query (without trying to use the value of the variable) it works fine. Hope that makes sense, and thank you for any help!

    Read the article

  • ajax to populate an input type text

    - by kawtousse
    Hi, I have an input of type text that I want to populate with a value from the database using the Ajax technique. First I define my text field like the following:

        <td><input type=text id='st' value=" " name='stname' onclick="donnom();" /></td>

    In JavaScript I do the following:

        xhr5.onreadystatechange = function(){
            if(xhr5.readyState == 4 && xhr5.status == 200)
            {
                selects5 = xhr5.responseText;
                // Use innerHTML to add the options to the list
                document.getElementById('st').innerHTML = selects5;
            }
        };
        xhr5.open("POST","ajaxIDentifier5.jsp",true);
        xhr5.setRequestHeader('Content-Type','application/x-www-form-urlencoded');
        id=document.getElementById(idIdden).value;
        xhr5.send("id="+id);

    In IDentifier5.jsp I put the following code:

        <%
        String id = request.getParameter("id");
        System.out.println("idDailyTimeSheet ajaxIDentifier5 as is:" + id);
        Session s = null;
        Transaction tx;
        try {
            s = HibernateUtil.currentSession();
            tx = s.beginTransaction();
            Query query = s.createQuery("select from Dailytimesheet dailytimesheet where dailytimesheet.IdDailyTimeSheet=" + id + " ");
            for(Iterator it = query.iterate(); it.hasNext();)
            {
                if(it.hasNext())
                {
                    Dailytimesheet object = (Dailytimesheet)it.next();
                    out.print( "<input type=\"text\" id=\"st1\" value=\""+object.getTimeFrom()+"\" name=\"starting\" onclick=\"donnom()\" ></input>");
                }
            }
        } catch (HibernateException e) {
            e.printStackTrace();
        }
        %>

    I only want the value in the text input to be populated from the database, because after that I will be able to change it. Thanks for the help.

    Read the article

  • Performance Difference between HttpContext user and Thread user

    - by atrueresistance
    I am wondering what the difference is between HttpContext.Current.User.Identity.Name.ToString.ToLower and Thread.CurrentPrincipal.Identity.Name.ToString.ToLower. Both methods grab the username in my ASP.NET 3.5 web service. I decided to figure out whether there was any difference in performance using a little program, running from a full Stop to Start Debugging on every run:

        Dim st As DateTime = DateAndTime.Now
        Try
            'user = HttpContext.Current.User.Identity.Name.ToString.ToLower
            user = Thread.CurrentPrincipal.Identity.Name.ToString.ToLower
            Dim dif As TimeSpan = Now.Subtract(st)
            Dim break As String = "nothing"
        Catch ex As Exception
            user = "Undefined"
        End Try

    I set a breakpoint on break to read the value of dif. The results were the same for both methods:

        dif.Milliseconds  0  Integer
        dif.Ticks         0  Long

    Using a longer duration, looping 5,000 times, results in these figures:

    Thread method:
        run 1: dif.Milliseconds 125 Integer   dif.Ticks 1250000 Long
        run 2: dif.Milliseconds 0 Integer     dif.Ticks 0 Long
        run 3: dif.Milliseconds 0 Integer     dif.Ticks 0 Long

    HttpContext method:
        run 1: dif.Milliseconds 15 Integer    dif.Ticks 156250 Long
        run 2: dif.Milliseconds 156 Integer   dif.Ticks 1562500 Long
        run 3: dif.Milliseconds 0 Integer     dif.Ticks 0 Long

    So I guess, which is more preferred, or more compliant with web service standards? If there is some type of performance advantage, I can't really tell. Which one scales to larger environments more easily?

    Read the article

  • How to structure an index for type ahead for extremely large dataset using Lucene or similar?

    - by Pete
    I have a dataset of 200 million+ records and am looking to build a dedicated backend to power a type-ahead solution. Lucene is of interest given its popularity and license type, but I'm open to other open-source suggestions as well. I am looking for advice, tales from the trenches, or even better, direct instruction on what I will need as far as amount of hardware and structure of software.

    Requirements:

    Must have:
    - The ability to do starts-with substring matching (I type in 'st' and it should match 'Stephen')
    - The ability to return results very quickly; I'd say 500 ms is an upper bound.

    Nice to have:
    - The ability to feed relevance information into the indexing process, so that, for example, more popular terms would be returned ahead of others and not just alphabetically, aka Google style.
    - In-word substring matching, so for example 'st' would match 'bestseller'.

    Note: This index will purely be used for type-ahead and does not need to serve standard search queries. I am not worried about getting advice on how to set up the front end or AJAX, as long as the index can be queried as a service or directly via Java code. Up votes for any useful information that allows me to get closer to an enterprise-level type-ahead solution.
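    For the starts-with requirement by itself, a sorted term dictionary plus binary search already meets the latency budget comfortably, even at large scale; the harder parts are the relevance ordering and the in-word matching. A minimal conceptual sketch in Python (not a Lucene example; the sample terms, function name and limit are made up for illustration):

        # Conceptual prefix lookup over a sorted, deduplicated term list.
        # Two binary searches bound the range of terms sharing the prefix.
        import bisect

        terms = sorted(["apple", "bestseller", "stephen", "steve", "stone"])  # assumed sample data

        def starts_with(prefix, limit=10):
            lo = bisect.bisect_left(terms, prefix)
            hi = bisect.bisect_right(terms, prefix + "\uffff")  # upper bound of the prefix range
            return terms[lo:hi][:limit]

        print(starts_with("st"))  # ['stephen', 'steve', 'stone']

    In Lucene terms, the usual approach for type-ahead, as far as I recall, is to index edge n-grams of each term (st, ste, step, ...) so a prefix becomes an exact term lookup, and to bake popularity into the sort order; in-word matching ('st' matching 'bestseller') would use n-grams over the whole word instead.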

    Read the article

  • Copying a subset of data to an empty database with the same schema

    - by user193655
    I would like to export part of a database full of data to an empty database. Both databases have the same schema. I want to maintain referential integrity. To simplify, my case is like this:

    MainTable has the following fields:
    1) MainID integer PK
    2) Description varchar(50)
    3) ForeignKey integer FK to MainID of SecondaryTable

    SecondaryTable has the following fields:
    4) MainID integer PK (referenced by (3))
    5) AnotherDescription varchar(50)

    The goal I'm trying to accomplish is "export all records from MainTable using a WHERE condition", for example all records where MainID < 100. To do it manually I should first export all data from SecondaryTable contained in this select:

        select * from SecondaryTable ST outer join PrimaryTable PT on ST.MainID = PT.MainID

    then export the needed records from MainTable:

        select * from MainTable where MainID < 100

    This is manual, OK. Of course my real case is much, much more complex; I have 200+ tables, so doing it manually is painful/impossible, and I have many cascading FKs. Is there a way to force the copy of the main table only while "enforcing referential integrity", so that my query is something like:

        select * from MainTable where MainID < 100 WITH "COPYING ALL FK sources"

    In this case field (5) would also be copied. Is there a syntax or a tool to do this? Table per table I'd like to insert conditions (for example, MainID < 100 applies only to MainTable, but I have other tables as well).

    Read the article

  • Faster way to split a string and count characters using R?

    - by chrisamiller
    I'm looking for a faster way to calculate GC content for DNA strings read in from a FASTA file. This boils down to taking a string and counting the number of times that the letter 'G' or 'C' appears. I also want to specify the range of characters to consider. I have a working function that is fairly slow, and it's causing a bottleneck in my code. It looks like this:

        ##
        ## count the number of GCs in the characters between start and stop
        ##
        gcCount <- function(line, st, sp){
            chars = strsplit(as.character(line), "")[[1]]
            numGC = 0
            for(j in st:sp){
                ## nested ifs faster than an OR (|) construction
                if(chars[[j]] == "g"){
                    numGC <- numGC + 1
                }else if(chars[[j]] == "G"){
                    numGC <- numGC + 1
                }else if(chars[[j]] == "c"){
                    numGC <- numGC + 1
                }else if(chars[[j]] == "C"){
                    numGC <- numGC + 1
                }
            }
            return(numGC)
        }

    Running Rprof gives me the following output:

        > a = "GCCCAAAATTTTCCGGatttaagcagacataaattcgagg"
        > Rprof(filename="Rprof.out")
        > for(i in 1:500000){gcCount(a,1,40)};
        > Rprof(NULL)
        > summaryRprof(filename="Rprof.out")

                        self.time self.pct total.time total.pct
        "gcCount"           77.36     76.8     100.74     100.0
        "=="                18.30     18.2      18.30      18.2
        "strsplit"           3.58      3.6       3.64       3.6
        "+"                  1.14      1.1       1.14       1.1
        ":"                  0.30      0.3       0.30       0.3
        "as.logical"         0.04      0.0       0.04       0.0
        "as.character"       0.02      0.0       0.02       0.0

        $by.total
                        total.time total.pct self.time self.pct
        "gcCount"           100.74     100.0     77.36     76.8
        "=="                 18.30      18.2     18.30     18.2
        "strsplit"            3.64       3.6      3.58      3.6
        "+"                   1.14       1.1      1.14      1.1
        ":"                   0.30       0.3      0.30      0.3
        "as.logical"          0.04       0.0      0.04      0.0
        "as.character"        0.02       0.0      0.02      0.0

        $sampling.time
        [1] 100.74

    Any advice for making this code faster?
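    The general fix here, whatever the language, is to stop branching per character and operate on the slice as a whole. As a cross-language illustration only (this is Python, not R, and the names are made up), counting occurrences over the substring expresses the same computation without the per-character branches:

        # Illustration of the "count over the slice" idea, sketched in Python.
        def gc_count(line, st, sp):
            # st and sp are 1-based and inclusive, to mirror the R function above
            return sum(line[st - 1:sp].count(c) for c in "gGcC")

        print(gc_count("GCCCAAAATTTTCCGGatttaagcagacataaattcgagg", 1, 40))  # 16

    In R the analogous move is to compare the whole character vector against the four letters at once instead of testing one element per loop iteration.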

    Read the article

  • How to get pixel information inside a fragment shader?

    - by user697111
    In my fragment shader I can load a texture, then do this:

        uniform sampler2D tex;

        void main(void) {
            vec4 color = texture2D(tex, gl_TexCoord[0].st);
            gl_FragColor = color;
        }

    That sets the current pixel to the color value of the texture. I can modify these, etc., and it works well. But a few questions.

    How do I tell which pixel I am? For example, say I want to set pixel 100,100 (x,y) to red and everything else to black. How do I do an "if currentSelf.Position() == (100,100) then color = red, else color = black"? I know how to set colors, but how do I get my location?

    Secondly, how do I get values from a neighbor pixel? I tried this:

        vec4 nextColor = texture2D(tex, gl_TexCoord[1].st);

    But it's not clear what that is returning. If I'm pixel 100,100, how do I get the values from 101,100 or 100,101?

    Read the article

  • Changing html content of a div before and after ajax request

    - by R27
    I am trying to change the "ADD" button (in a div) to some text/image as soon as it is clicked, and after the Ajax request is processed, in the success block, I want the div to get the button back. I see the Ajax request itself is not getting processed. Can someone explain what my mistake is? I just removed the jsfiddle link and am pasting the script here to avoid confusion about the dependencies.

    JS script:

        var ajax_load = "Please wait...";
        jQuery(document).ready(function($) {
            $("#add_button").click(function(event){
                var st = $("#add_div").html();
                $("#add_div").html(ajax_load);
                $("#sform").validate({
                    errorClass: "error",
                    submitHandler: function (form) {
                        alert('inside submit');
                        $.ajax({
                            type: "GET",
                            url: 'form.cgi',
                            data: $("#sform").serialize(),
                            success: function (msg) {
                                alert('msg');
                                $("#add_div").html(st);
                                $("#sform")[0].reset();
                            }
                        });
                    }
                });
            });
        });

    And the HTML piece is:

        <form id=sform>LABEL <input id=field1 type=text>
            <div id="add_div">
                <input type="button" value="ADD" id="add_button">
            </div>
        </form>

    I have the jquery.validate.min.js script included.

    Read the article

  • LINQ Joins - Performance

    - by Meiscooldude
    I am curious how exactly LINQ (not LINQ to SQL) performs its joins behind the scenes, in relation to how SQL Server performs joins.

    Before executing a query, SQL Server generates an execution plan. The execution plan is basically an expression tree describing what it believes is the best way to execute the query. Each node provides information on whether to do a sort, scan, select, join, etc. On a 'Join' node in our execution plan, we can see three possible algorithms: Hash Join, Merge Join, and Nested Loops Join. SQL Server chooses the algorithm for each join operation based on the expected number of rows in the inner and outer tables, what type of join we are doing (some algorithms don't support all types of joins), whether we need the data ordered, and probably many other factors.

    Join algorithms:
    - Nested Loops Join: best for small inputs; can be optimized with an ordered inner table.
    - Merge Join: best for medium to large sorted inputs, or an output that needs to be ordered.
    - Hash Join: best for medium to large inputs; can be parallelized to scale linearly.

    LINQ query:

        DataTable firstTable, secondTable;
        ...
        var rows = from firstRow in firstTable.AsEnumerable ()
                   join secondRow in secondTable.AsEnumerable ()
                   on firstRow.Field<object> (randomObject.Property)
                   equals secondRow.Field<object> (randomObject.Property)
                   select new {firstRow, secondRow};

    SQL query:

        SELECT *
        FROM firstTable fT
        INNER JOIN secondTable sT ON fT.Property = sT.Property

    SQL Server might use a Nested Loops Join if it knows there are a small number of rows in each table, a Merge Join if it knows one of the tables has an index, and a Hash Join if it knows there are a lot of rows in either table and neither has an index. Does LINQ choose its join algorithm, or does it always use the same one?
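    For what it's worth, LINQ to Objects is generally described as always using a hash-style join: Enumerable.Join builds a hash-based lookup over one sequence and then streams the other against it, with no statistics-driven choice of algorithm the way SQL Server makes one. A conceptual sketch of that strategy in Python (an illustration of the algorithm, not LINQ's actual source; the names and sample rows are made up):

        # Conceptual hash join: build a hash table over the inner input, then probe it
        # while streaming the outer input.
        from collections import defaultdict

        def hash_join(outer, inner, outer_key, inner_key):
            lookup = defaultdict(list)
            for row in inner:                       # build phase
                lookup[inner_key(row)].append(row)
            for o in outer:                         # probe phase
                for i in lookup.get(outer_key(o), []):
                    yield (o, i)

        first = [{"Property": 1, "A": "x"}, {"Property": 2, "A": "y"}]
        second = [{"Property": 1, "B": "z"}]
        pairs = list(hash_join(first, second, lambda r: r["Property"], lambda r: r["Property"]))
        print(pairs)  # [({'Property': 1, 'A': 'x'}, {'Property': 1, 'B': 'z'})]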

    Read the article

< Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >