Search Results

Search found 5509 results on 221 pages for 'sound effects'.


  • View Docs and PDFs Directly in Google Chrome

    - by Matthew Guay
    Would you like to view documents, presentations, and PDFs directly in Google Chrome? Here's a handy extension that makes Google Docs your default online viewer so you don't have to download the file first.

    Getting Started
    By default, when you come across a PDF or other common document file online in Google Chrome, you'll have to download the file and open it in a separate application. It'd be much easier to simply view online documents directly in Chrome. To do this, head over to the Docs PDF/PowerPoint Viewer page on the Chrome Extensions site (link below), and click Install to add it to your browser. Click Install to confirm that you want to install this extension. Extensions don't run by default in Incognito mode, so if you'd like to always view documents directly in Chrome, open the Extensions page and check Allow this extension to run in incognito. Now, when you click a link for a document online, such as a .docx file from Word, it will open in the Google Docs viewer. These documents usually render at their original full quality. You can zoom in and out to see exactly what you want, or search within the document. Or, if it doesn't look correct, you can click the Download link in the top left to save the original document to your computer and open it in Office. Even complex PDFs render very nicely. Do note that Docs will keep downloading the document as you're reading it, so if you jump to the middle of a document it may look blurry at first but will quickly clear up. You can even view famous presentations online without opening them in PowerPoint. Note that this will only display the slides themselves, but if you're looking for information you likely don't need the slideshow effects anyway.

    Adobe Reader Conflicts
    If you already have Adobe Acrobat or Adobe Reader installed on your computer, PDF files may open with the Adobe plugin. If you'd prefer to read your PDFs with the Docs PDF Viewer, then you need to disable the Adobe plugin. Enter the following in your Address Bar to open your Chrome Plugins page: chrome://plugins/ and then click Disable underneath the Adobe Acrobat plugin. Now your PDFs will always open with the Docs viewer instead.

    Performance
    Who hasn't been frustrated by clicking a link to a PDF file, only to have your browser pause for several minutes while Adobe Reader struggles to download and display the file? Google Chrome's default behavior of simply downloading the files and letting you open them is hardly more helpful. This extension takes away both of these problems, since it renders the documents on Google's servers. Most documents opened fairly quickly in our tests, and we were able to read large PDFs only seconds after clicking their link. Also, the Google Docs viewer rendered the documents much better than the HTML version in Google's cache. Google Docs did seem to have problems with some files, and we saw error messages on several documents we tried to open. If you encounter this, click the Download link in the top left corner to download the file and view it from your desktop instead.

    Conclusion
    Google Docs has improved over the years, and now it offers fairly good rendering even on more complex documents. This extension can make your browsing easier, and help documents and PDFs feel more like part of the Internet. And, since the documents are rendered on Google's servers, it's often faster to preview large files than to download them to your computer.
    Link: Download the Docs PDF/PowerPoint Viewer extension from Google

    Read the article

  • Forced Learning

    - by Ben Griswold
    If you ask me, it can be a little intimidating to stand in front of a group and walk through anything remotely technical. Even if you know "Technical Thingy #52" inside and out, public speaking can be unsettling. And if you don't have your stuff together, well, it can be downright horrifying. With that said, if given the choice, I still like to schedule myself to present on unfamiliar topics. Over the past few months, I've talked about Aspect-Oriented Programming, Functional Programming, Lean Software Development and Kanban Systems, Domain-Driven Design and Behavior-Driven Development. What do these topics have in common? You guessed it: I was truly interested in them. I had only a superficial understanding of each. Huh? Why in the world would I ever want to put myself in that intimidating situation? Actually, I rarely want to put myself into that situation, but I often do because I like the results. There's nothing remotely clever going on here. All I'm doing is putting myself into a compromising situation knowing that I'll likely work myself out of it by learning the topic prior to the presentation. I'm simply time-boxing myself to learn something new while knowing there are negative repercussions if I fall short. So, I end up doing tons of research and I learn bunches to ensure I have my head firmly wrapped around the material before my talk. I'm not saying I become an expert overnight (or over a couple of weeks), but I'll definitely know enough to be confident and comfortable, and I'll know more than enough to ensure the audience will learn a thing or two from me. It's forced learning, and though it might sound a little scary to some, it works for me. Now I could very easily rename this post to something like Fear Is My Motivator because, in a sense, fear of failure and embarrassment is what's driving my learning. However, I'm the guy signing up for the presentation, and since the entire process is self-imposed I'm not sure Fear deserves too much credit. Anyway…

    Read the article

  • How To show document directory save image in thumbnail in cocos2d class

    - by Anil gupta
    I have just implemented multiple photo selection from the iPhone photo library, and I am saving all the selected photos in the documents directory as an array. Now I want to show all the saved images from the documents directory as thumbnails in my class. I have tried some logic, but my game keeps crashing. My code is below. Any help will be appreciated. Thanks in advance.

        -(id) init
        {
            // always call "super" init
            // Apple recommends to re-assign "self" with the "super" return value
            if( (self=[super init]))
            {
                CCSprite *photoalbumbg = [CCSprite spriteWithFile:@"photoalbumbg.png"];
                photoalbumbg.anchorPoint = ccp(0,0);
                [self addChild:photoalbumbg z:0];

                //Background Sound
                // [[SimpleAudioEngine sharedEngine] playBackgroundMusic:@"Background Music.wav" loop:YES];

                CCSprite *photoalbumframe = [CCSprite spriteWithFile:@"photoalbumframe.png"];
                photoalbumframe.position = ccp(160,240);
                [self addChild:photoalbumframe z:2];

                CCSprite *frame = [CCSprite spriteWithFile:@"Photo-Frames.png"];
                frame.position = ccp(160,270);
                [self addChild:frame z:1];

                /*_____________________________________________________________________________________*/

                CCMenuItemImage *upgradebtn = [CCMenuItemImage itemFromNormalImage:@"AlbumUpgrade.png"
                                                                     selectedImage:@"AlbumUpgrade.png"
                                                                            target:self
                                                                          selector:@selector(Upgrade:)];
                CCMenu *fMenu = [CCMenu menuWithItems:upgradebtn, nil];
                fMenu.position = ccp(200,110);
                [self addChild:fMenu z:3];

                NSError *error;
                NSFileManager *fM = [NSFileManager defaultManager];
                NSString *documentsDirectory = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents"];
                NSLog(@"Documents directory: %@", [fM contentsOfDirectoryAtPath:documentsDirectory error:&error]);
                NSArray *allfiles = [fM contentsOfDirectoryAtPath:documentsDirectory error:&error];
                directoryList = [[NSMutableArray alloc] init];

                for(NSString *file in allfiles)
                {
                    NSString *path = [documentsDirectory stringByAppendingPathComponent:file];
                    [directoryList addObject:file];
                }
                NSLog(@"array file name value ==== %@", directoryList);

                CCSprite *temp = [CCSprite spriteWithFile:[directoryList objectAtIndex:0]];
                [temp setTextureRect:CGRectMake(160.0f, 240.0f, 50,50)];
                // temp.anchorPoint = ccp(0,0);
                [self addChild:temp z:10];

                for(UIImage *file in directoryList)
                {
                    // NSData *pngData = [NSData dataWithContentsOfFile:file];
                    // image = [UIImage imageWithData:pngData];
                    NSLog(@"uiimage = %@", image);
                    // UIImage *image = [UIImage imageWithContentsOfFile:file];
                    for (int i=1; i<=3; i++)
                    {
                        for (int j=1; j<=3; j++)
                        {
                            CCTexture2D *tex = [[[CCTexture2D alloc] initWithImage:file] autorelease];
                            CCSprite *selectedimage = [CCSprite spriteWithTexture:tex rect:CGRectMake(0, 0, 67, 66)];
                            selectedimage.position = ccp(100*i, 350*j);
                            [self addChild:selectedimage];
                        }
                    }
                }
            }
            return self;
        }

    Read the article

  • Listen to Local FM Radio in Windows 7 Media Center

    - by DigitalGeekery
    If you have a supported tuner card and a connected FM antenna, you can listen to your favorite local over-the-air FM stations in Windows 7 Media Center. Before the FM radio option will be available in Windows Media Center, you'll need to have a TV or Radio tuner card installed and configured. If you have a TV tuner card installed, you may already have a Radio tuner as well. Many TV tuner cards also have built-in FM tuners. Open Windows Media Center, scroll to "Music" and over to "Radio," then click on "FM Radio." The radio will turn on and you'll see the current station number listed in the white box. Just below are standard "Seek" and "Tune" buttons, as well as "Preset" options. Tuning works just like a typical FM radio. Click on the (-) or (+) buttons to "Tune" or "Seek" up and down the dial. If you already know the frequency of the station, enter the numbers using the numeric keypad on the remote control or keyboard. To save the current station you're listening to as a preset, click on the "Save as Preset" button. Type in a custom name for your preset station and click "Save." Once you set your presets, they will also be available on the main FM Radio screen. The transport controls at the bottom of the screen also allow you to control Volume, Pause, Play, Skip back, and Skip forward. Fast Forward and Rewind, however, are not supported. This is a nice option if you'd like to listen to your local FM favorites on your computer, especially if those stations aren't available online. If you don't have an FM tuner and want to listen to thousands of online radio streams, check out our article on RadioTime in WMC.

    Read the article

  • links for 2010-04-22

    - by Bob Rhubart
    Barry N. Perkins: Unique Business Value vs. Unique IT
    "Some solutions may look good today, solving a budget challenge by reducing cost, or solving a specific tactical challenge, but result in highly complex environments, that may be difficult to manage and maintain and limit the future potential of your business. Put differently, some solutions might push today's challenge into the future, resulting in a more complex and expensive solution." -- Barry N. Perkins, VP Oracle Modernization & Oracle Integrated Solutions (tags: oracle otn enterprisearchitecture modernization)

    Paul Homchick: The Information Driven Value Chain - Part 2
    Paul Homchick continues his series with a look "at the way investments have been made in enterprise software in an effort to create and manage value, and how systems are moving from a controlled-process approach design towards dynamically gathering and using information." (tags: oracle otn enterprisearchitecture)

    @vambenepe: The battle of the Cloud Frameworks: Application Servers redux?
    "The battle of the Cloud Frameworks has started," says William Vambenepe, "and it will look a lot like the battle of the Application Servers which played out over the last decade and a half." (tags: oracle otn cloud frameworks appserver)

    @ORACLENERD: COLLABORATE: Day 4 Wrap Up
    Oraclenerd fesses up: "The day started out with the realization that I pulled off the best (COLLABORATE - self-anointed) prank ever. Twitter was...all atwitter about the fact that Mark Rittman was Oracle's Person of the Year. Of course it wasn't true. If you look at the picture, you'll realize that he's wearing exactly the same clothes in the magazine cover as he is in real life." (tags: collaborate2010 oracleace)

    Oracle's Hal Stern at Cloud Expo: "We've Moved from 'What' to 'How'" | Cloud Computing Journal
    "Hal also spoke a bit about building 'a sustainable IT model.' By this, he said he didn't mean the various Green IT and similar efforts that 'are all about data center efficiency. I think the operational model is just as important. Many enterprises are managing a tremendous amount of complexity, and it's hard to make this sustainable.'" -- Cloud News Desk (tags: oracle cloud cloudexpo halstern)

    @ORACLENERD: COLLABORATE: The Beach Party
    "Then tiki statues somehow were incorporated into various dances" -- Oracle ACE Chet "oraclenerd" Justice (tags: oracle otn oracleace collaborate2010 oaug ioug lasvegas)

    David Andrews: Collaborate Day Two
    "Collaborate 2010 has focused on helping attendees understand what is already available and how to make more effective use of it. This does not sound exciting but it is extremely valuable. Most customers use only a small fraction of the capability of the products they already own. Helping them understand all the additional things they could be doing without buying anything more is very valuable." -- David Andrews (tags: oracle oaug collaborate2010 ioug)

    Read the article

  • Can a Printer Print White?

    - by Jason Fitzpatrick
    The vast majority of the time we all print on white media: white paper, white cardstock, and other neutral white surfaces. But what about printing white? Can modern printers print white, and if not, why not? Read on as we explore color theory, printer design choices, and why white is the foundation of the printing process. Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. Image by Coiote O.; available as wallpaper here.

    The Question
    SuperUser reader Curious_Kid is, well, curious about printers. He writes: I was reading about different color models, when this question hit my mind. Can the CMYK color model generate white color? Printers use CMYK color mode. What will happen if I try to print a white colored image (rabbit) on black paper with my printer? Will I get any image on the paper? Does the CMYK color model have room for white?

    The Answer
    SuperUser contributor Darth Android offers some insight into the CMYK process: You will not get anything on the paper with a basic CMYK inkjet or laser printer. The CMYK color mixing is subtractive, meaning that it requires the base that is being colored to have all colors (i.e., white) so that it can create color variation through subtraction:
    White - Cyan - Yellow = Green
    White - Yellow - Magenta = Red
    White - Cyan - Magenta = Blue
    White is represented as 0 cyan, 0 yellow, 0 magenta, and 0 black – effectively, 0 ink for a printer that simply has those four cartridges. This works great when you have white media, as "printing no ink" simply leaves the white exposed, but as you can imagine, this doesn't work for non-white media. If you don't have a base color to subtract from (i.e., black), then it doesn't matter what you subtract from it, you still have the color black. [But], as others are pointing out, there are special printers which can operate in the CMYW color space, or otherwise have a white ink or toner. These can be used to print light colors on top of dark or otherwise non-white media. You might also find my answer to a different question about color spaces helpful or informative. Given that the majority of printer media in the world is white and printing pure white on non-white media is a specialty process, it's no surprise that home and (most) commercial printers alike have no provision for it. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
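
    The quoted arithmetic is easy to sanity-check in code. The sketch below is a minimal, idealized model of subtractive CMY mixing (it ignores the black channel and real ink behavior): each ink simply absorbs its complementary primary from whatever light the paper reflects.

        # Idealized subtractive mixing: each ink absorbs its complementary primary
        # from the light reflected by the paper (1.0 = full reflection).
        def print_color(c, m, y, paper=(1.0, 1.0, 1.0)):
            r, g, b = paper
            return (r * (1 - c), g * (1 - m), b * (1 - y))

        print(print_color(1, 0, 1))                    # cyan + yellow on white  -> (0.0, 1.0, 0.0), green
        print(print_color(0, 0, 0))                    # "white" = zero ink, you just see the paper
        print(print_color(0, 0, 0, paper=(0, 0, 0)))   # zero ink on black paper -> still black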

    Read the article

  • New SQL Code Deployment Book and Damn I Need to Blog More

    - by Rodney
    Select datediff(d,'02/19/2009',getdate())
    The value returned from the above SELECT statement is 398, and that is the number of days since my last blog post. As I was formulating my apology for this hiatus from blogging, it dawned on me that I also do not twitter (sorry, tweet) and, as apologies beget apologies, I then realized that instead of catching up on the backlog of blogs, I should write a book about what I have been most focused on in the past year, one month and 3 days. That focus is my day job, which of course, most of us have. And that day job we share is why most of us read blogs, tweets, articles and even books in the first place. So my focus for the past year has been SQL code deployments and all that entails. I am fortunate that Redgate has agreed to entertain my crazy notion of writing an entire book about this subject, which I have tentatively titled, "The Sound and the Fury". Wait... that is not right. Oh yes, a title more befitting a technical tome but with as much profundity: "Standardizing SQL Server Code Deployments - A Redgate Guide". The great American novel must wait a few more years. As I begin this journey, I am inviting you to assist me in the discovery process and even be interviewed and included in the book itself. How do you do deployments in your company? Do you have a documented process or no process? Do you do code review or cross your fingers? Do you work for a small company or a Fortune 100 company? Government regulations or garage? It does not matter to me. I am not here to judge. I have worked for both kinds of companies myself and have seen many things that you can relate to. If you would like to participate and are one of the 3 people still reading this blog after 398 days, please fill out my survey and let's get started. http://www.surveymonkey.com/s/RRG86RH
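
    For anyone who wants to sanity-check the arithmetic in that opening SELECT, here is a minimal equivalent in Python; the 2010-03-24 "today" is inferred from the 398-day figure in the post, not stated in it.

        from datetime import date

        # DATEDIFF(d, '02/19/2009', GETDATE()) counts day boundaries crossed
        days_since_last_post = (date(2010, 3, 24) - date(2009, 2, 19)).days
        print(days_since_last_post)  # 398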

    Read the article

  • Is Linear Tape File System (LTFS) Best For Transportable Storage?

    - by rickramsey
    Those of us in tape storage engineering take a lot of pride in what we do, but understand that tape is the right answer to a storage problem only some of the time. And, unfortunately for a storage medium with such a long history, it has built up a few preconceived notions that are no longer valid. When I hear customers debate whether to implement tape vs. disk, one of the common strikes against tape is its perceived lack of usability. If you could go back a few generations of corporate acquisitions, you would discover that StorageTek engineers recognized this problem and started developing a solution where a tape drive could look just like a memory stick to a user. The goal was to not have to care about where files were on the cartridge, but to simply see the list of files that were on the tape, and click on them to open them up. Eventually, our friends in tape over at IBM built upon our work at StorageTek and Sun Microsystems and released the Linear Tape File System (LTFS) feature for the current LTO5 generation of tape drives as an open specification. LTFS is really a wonderful feature and we're proud to have taken part in its beginnings and, as you'll soon read, its future. Today we offer LTFS-Open Edition, which is free for you to use in your Oracle Enterprise Linux 5.5 environment - not only on your LTO5 drives, but also on your Oracle StorageTek T10000C drives. You can download it free from Oracle and try it out. LTFS does exactly what its forefathers imagined. Now you can see immediately which files are on a cartridge. LTFS does this by splitting a cartridge into two partitions. The first holds all of the necessary metadata to create a directory structure for you to easily view the contents of the cartridge. The second partition holds all of the files themselves. When tape media is loaded onto a drive, a complete file system image is presented to the user. Adding files to a cartridge can be as simple as a drag-and-drop, just as you do today on your laptop when transferring files from your hard drive to a thumb drive, or with standard POSIX file operations. You may be thinking all of this sounds nice, but asking, "when will I actually use it?" As I mentioned at the beginning, tape is not the right solution all of the time. However, if you ever need to physically move data between locations, tape storage with LTFS should be your most cost-effective and reliable answer. I will give you a few example use cases where LTFS can be utilized. Media and Entertainment (M&E), Oil and Gas (O&G), and other industries have a strong need for their storage to be transportable. For example, an O&G company hunting for new oil deposits in remote locations takes very large underground seismic images which need to be shipped back to a central data center. M&E operations conduct similar activities when shooting video for productions. M&E companies also often transfer files to third parties for editing and other activities. These companies have three highly flawed options for transporting data: electronic transfer, disk storage transport, or tape storage transport. The first option, electronic transfer, is impractical because of the expense of the bandwidth required to transfer multi-terabyte files reliably and efficiently. If there's one place that has bandwidth, it's your local post office, so many companies revert to physically shipping storage media. Typically, M&E companies rely on transporting disk storage between sites even though it, too, is expensive.
    Tape storage should be the preferred format because, as IDC points out, "Tape is more suitable for physical transportation of large amounts of data as it is less vulnerable to mechanical damage during transportation compared with disk" (see note 1, below). However, tape storage has not been used in the past because of the restrictions created by proprietary formats. A tape may only be readable if both the sender and receiver have the same proprietary application used to write the file. In addition, the workflows may be slowed by the need to read the entire tape cartridge during recall. LTFS solves both of these problems, clearing the way for tape to become the standard platform for transferring large files. LTFS is open and, as long as you've downloaded the free reader from our website or that of anyone in the LTO consortium, you can read the data. So if a movie studio ships a scene to a third-party partner to add, for example, sound effects or a music score, it doesn't have to care what technology the third party has. If it's written back to an LTFS-formatted tape cartridge, it can be read. Some tape vendors like to claim LTFS is a "standard," but beauty is in the eye of the beholder. It's a specification at this point, not a standard. That said, we're already seeing application vendors create functionality to write in an LTFS format based on the specification. And it's my belief that both customers and the tape storage industry will see the most benefit if we all follow the same path. As such, we have volunteered to lead the way in making LTFS a standard, first with the Storage Networking Industry Association (SNIA), and eventually through to standards bodies such as the American National Standards Institute (ANSI). Expect to hear good news soon about our efforts. So, if storage transportability is one of your requirements, I recommend giving LTFS a look. It makes tape much more user-friendly and it's free, which allows tape to maintain all of its cost advantages over disk!
    Note 1 - IDC Report. April, 2011. "IDC's Archival Storage Solutions Taxonomy, 2011"
    - Brian Zents
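
    Since LTFS presents the cartridge as a mounted file system, moving data onto it looks like any other file copy. Below is a minimal sketch assuming a hypothetical mount point (/mnt/ltfs) where an LTFS volume has already been mounted; the file names are placeholders.

        import os
        import shutil

        LTFS_MOUNT = "/mnt/ltfs"   # hypothetical mount point for the LTFS-formatted cartridge

        # The directory listing comes from the index (metadata) partition on the tape
        print(os.listdir(LTFS_MOUNT))

        # "Drag and drop" equivalent: write a large survey file to the data partition
        shutil.copy("/data/seismic/survey_0042.segy",
                    os.path.join(LTFS_MOUNT, "survey_0042.segy"))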

    Read the article

  • NDepend 4 – First Steps

    - by Ricardo Peres
    Introduction
    Thanks to Patrick Smacchia I had the chance to test NDepend 4. I can only say: awesome! This will be the first of a series of posts on NDepend, where I will talk about my discoveries. Keep in mind that I am just starting to use it, so more experienced users may find these too basic; I just hope I don't say anything foolish! I must say that I am in no way affiliated with NDepend and I never actually met Patrick.

    Installation
    No installation program – a curious decision, and I'm not against it – just unzip the files to a folder and run the executable. It will optionally register itself with Visual Studio 2008, 2010 and 11 as well as RedGate's Reflector; also, it automatically looks for updates. NDepend can either be used as a stand-alone program (with or without a GUI) or from within Visual Studio or Reflector.

    Getting Started
    One thing that really pleases me is the Getting Started section of the stand-alone, with links to pages on NDepend's web site, featuring detailed explanations, which usually include screenshots and small videos (<5 minutes). There's also a How do I with hierarchical navigation that guides us through the major features so that we can easily find what we want.

    Usage
    There are two basic ways to use NDepend: analyze .NET solutions, projects or assemblies; or compare two versions of the same assembly. I have so far not used NDepend to compare assemblies, so I will first talk about the first option. After selecting a solution and some of its projects, it generates a single HTML page with a highly detailed report of the analysis it produced. This includes some metrics such as number of lines of code, IL instructions, comments, types, methods and properties, the calculation of the cyclomatic complexity, coupling and lots of other indicators, typically grouped by type, namespace and assembly. The HTML also includes some nice diagrams depicting assembly dependencies, type and method relative proportions (according to the number of IL instructions, I guess) and assembly analysis relating to abstractness and stability. Useful, I would say. Then there are the rules; NDepend tests the target assemblies against a set of more than 120 rules, grouped in the categories Code Quality, Object Oriented Design, Design, Architecture and Layering, Dead Code, Visibility, Naming Conventions, Source Files Organization and .NET Framework Usage. The full list can be configured in the application, and an explanation of each rule can be found on the web site. Rules can be validated, violated and violated in a critical manner, and the HTML will contain the violated rules, their queries – more on this later – and results. The HTML uses some nice JavaScript effects, which allow paging and sorting of tables, so it's nice to use. Similar to the rules, there are some queries that display results for a number (about 200) of questions grouped as Object Oriented Design, API Breaking Changes (for assembly version comparison), Code Diff Summary (also for version comparison) and Dead Code. The difference between queries and rules is that queries are not classified as passed, violated or critically violated; they just present results. The queries and rules are expressed through CQLinq, which is a very powerful LINQ derivative specific to code analysis. All of the included rules and queries can be enabled or disabled and new ones can be added, with intellisense to help.
    Besides the HTML report file, the NDepend application can be used to explore all analysis results, compare different versions of analysis reports and run custom queries.

    Comparison to Other Analysis Tools
    Unlike StyleCop, NDepend only works with assemblies, not source code, so you can't expect it to be able to enforce bracket placement, for example. It is more similar to FxCop, but you don't have the option to analyze at the IL level, that is, other than the number of IL instructions and the complexity.

    What's Next
    In the next few days I'll continue my exploration with a real-life test case.

    References
    The NDepend web site is http://www.ndepend.com/. Patrick keeps an updated blog on http://codebetter.com/patricksmacchia/ and he regularly monitors StackOverflow for questions tagged NDepend, which you can find on http://stackoverflow.com/questions/tagged/ndepend. The default list of CQLinq rules, queries and statistics can be found at http://www.ndepend.com/DefaultRules/webframe.html. The syntax itself is described at http://www.ndepend.com/Doc_CQLinq_Syntax.aspx and its features at http://www.ndepend.com/Doc_CQLinq_Features.aspx.

    Read the article

  • NYC Silverlight FireStarter - June 5th 2010 at the NYC Microsoft Office

    - by Sam Abraham
    On Saturday, June 5th, 2010, I spent the morning at the NYC Silverlight FireStarter. Presenting were Peter Laudati from Microsoft and Jason Beres, Matt Van Horn and Todd Snyder from Infragistics. I watched the Simulcast for the morning sessions as I was tied up with some work, but ended up finally making it to the Microsoft Office and had the opportunity to attend the last hour of the event in person. For me, the quality of the Simulcast was as good as in-person attendance as far as sound/video quality and interaction with the speakers were concerned. In the background was a screen with tweets from remote attendees asking questions or commenting on the presentations. Presenters did periodically stop to answer the tweeted questions as well as questions from attendees. The only thing I missed was getting my hands on some of that swag that was (literally) flying in the air on the event floor. Upon my arrival at the Microsoft Office location in NYC, I spoke with Rachel Appel and Peter Laudati, asking for permission to take a few photos to record the outstanding effort that took place in putting this event together. Both agreed and I started putting my photography skills to work. You can always gauge the quality of an event by the number of its attendees who opt to stay till the last minute as well as the level of interaction of the audience with the speaker. With most of the FireStarter attendees remaining till the very end of the talk, and with the many questions that were asked, one can simply judge the event as a success as per my aforementioned criteria. Evaluation forms were passed around and Peter strongly encouraged the audience to openly speak their mind as they recorded their comments. I didn't get to submit my evaluation as I was busy recording the event in photos, so here it goes: I believe that lots of hard work was put into making this event a reality. The quality of speakers, topics and level of Geekiness at the event was outstanding. Overall, aside from a minor issue with lunch delivery time, this event was of high quality and I am very sure everyone's evaluation will be in line with my analysis of it being a great success. Below are a few photos of the event.
    --Sam Abraham
    Site Director - West Palm Beach .Net User Group
    www.Fladotnet.com
    NYC Silverlight FireStarter Speakers - From left to right: Peter Laudati, Todd Snyder, Matt Van Horn & Jason Beres
    As Jason wasn't quite visible in the above photo, a closeup was taken (it was Jason's birthday and he had to leave a bit early, so the Infragistics team thought outside the box...)
    Full Room - That was at the last hour of the event
    Another view of the full room
    Discussions during the break
    End-of-event Raffle

    Read the article

  • Infinite detail inside Perlin noise procedural mapping

    - by Dave Jellison
    I am very new to game development but I was able to scour the internet to figure out Perlin noise enough to implement a very simple 2D tile infinite procedural world. Here's the question, and I think its answer is more conceptual than code-based. I understand the concept of "I plug in (x, y) and get back from Perlin noise p" (I'll call it p). P will always be the same value for the same (x, y) (as long as the Perlin algorithm parameters haven't changed, like altering the number of octaves, et cetera). What I want to do is be able to zoom into a square and generate smaller squares inside of the already generated overhead tile of terrain. Let's say I have a jungle tile for overhead terrain, but I want to zoom in and maybe see a small river tile that would only be a creek and not large enough to be a full "big tile" of water in the overhead. Of course, I want the same net effect as a Perlin equation inside a Perlin equation, if that makes sense? (aka. I want two people playing the game with the same settings to get the same terrain and details every time). I can conceptually wrap my head around the large tile being based on a "zoomed out" coordinate leaving enough room to drill into, but this approach doesn't make sense in my head (maybe I'm wrong). I'm guessing with this approach my overhead terrain would lose all of the cohesiveness delivered by the Perlin. Imagine I calculate (0, 0) as overhead tile 1 and then to the east of that I plug in (50, 0). OK, great, I now have 49 pixels of detail I could then "drill down" into. The issue I have in my head with this approach (without attempting it) is that there's no guarantee from my Perlin noise that (0,0) would be a good neighbor to (50,0), as they could have wildly different "elevations" or p/resultant values returned from the Perlin equation when I generate the overhead map. I think I can conceive of using the Perlin noise for the overhead tile and then reusing the p value as a seed for the "detail" level of noise once I zoom in. That would ensure my detail Perlin is always the same configuration for (0,0), (1,0), etc. ad nauseam, but I'm not sure if there are better approaches out there or if this is a sound approach at all.
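
    To make that last idea concrete, here is a minimal sketch of deterministic "noise inside noise" in Python. The noise2d function is a hypothetical stand-in for a real Perlin sampler; the point is only that the detail layer's seed is derived from the overhead tile's value, so two players with the same world seed always see the same zoomed-in terrain.

        import random

        def noise2d(x, y, seed=0):
            # Hypothetical stand-in for a real Perlin/value-noise sampler
            return random.Random(hash((x, y, seed))).random()

        WORLD_SEED = 1234

        def overhead(tx, ty):
            # Coarse value for the big tile (e.g. picks "jungle", "water", ...)
            return noise2d(tx, ty, WORLD_SEED)

        def detail(tx, ty, dx, dy):
            # Seed the detail layer from the overhead tile so everything stays deterministic
            tile_seed = hash((tx, ty, round(overhead(tx, ty), 6)))
            return noise2d(dx, dy, tile_seed)

        print(detail(0, 0, 3, 7) == detail(0, 0, 3, 7))  # True: same inputs, same terrain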

    Read the article

  • Using SQL Source Control with Fortress or Vault &ndash; Part 1

    - by AjarnMark
    I am fanatical when it comes to managing the source code for my company.  Everything that we build (in source form) gets put into our source control management system.  And I’m not just talking about the UI and middle-tier code written in C# and ASP.NET, but also the back-end database stuff, which at times has been a pain.  We even script out our Scheduled Jobs and keep a copy of those under source control. The UI and middle-tier stuff has long been easy to manage as we mostly use Visual Studio which has integration with source control systems built in.  But the SQL code has been a little harder to deal with.  I have been doing this for many years, well before Microsoft came up with Data Dude, so I had already established a methodology that, while not as smooth as VS, nonetheless let me keep things well controlled, and allowed doing my database development in my tool of choice, Query Analyzer in days gone by, and now SQL Server Management Studio.  It just makes sense to me that if I’m going to do database development, let’s use the database tool set.  (Although, I have to admit I was pretty impressed with the demo of Juneau that Don Box did at the PASS Summit this year.)  So as I was saying, I had developed a methodology that worked well for us (and I’ll probably outline in a future post) but it could use some improvement. When Solutions and Projects were first introduced in SQL Management Studio, I thought we were finally going to get our same experience that we have in Visual Studio.  Well, let’s say I was underwhelmed by Version 1 in SQL 2005, and apparently so were enough other people that by the time SQL 2008 came out, Microsoft decided that Solutions and Projects would be deprecated and completely removed from a future version.  So much for that idea. Then I came across SQL Source Control from Red-Gate.  I have used several tools from Red-Gate in the past, including my favorites SQL Compare, SQL Prompt, and SQL Refactor.  SQL Prompt is worth its weight in gold, and the others are great, too.  Earlier this year, we upgraded from our earlier product bundles to the new Developer Bundle, and in the process added SQL Source Control to our collection.  I thought this might really be the golden ticket I was looking for.  But my hopes were quickly dashed when I discovered that it only integrated with Microsoft Team Foundation Server and Subversion as the source code repositories.  We have been using SourceGear’s Vault and Fortress products for years, and I wholeheartedly endorse them.  So I was out of luck for the time being, although there were a number of people voting for Vault/Fortress support on their feedback forum (as did I) so I had hope that maybe next year I could look at it again. But just a couple of weeks ago, I was pleasantly surprised to receive notice in my email that Red-Gate had an Early Access version of SQL Source Control that worked with Vault and Fortress, so I quickly downloaded it and have been putting it through its paces.  So far, I really like what I see, and I have been quite impressed with Red-Gate’s responsiveness when I have contacted them with any issues or concerns that I have had.  I have had several communications with Gyorgy Pocsi at Red-Gate and he has been immensely helpful and responsive. I must say that development with SQL Source Control is very different from what I have been used to.  
This post is getting long enough, so I’ll save some of the details for a separate write-up, but the short story is that in my regular mode, it’s all about the script files.  Script files are King and you dare not make a change to the database other than by way of a script file, or you are in deep trouble.  With SQL Source Control, you make your changes to your development database however you like.  I still prefer writing most of my changes in T-SQL, but you can also use any of the GUI functionality of SSMS to make your changes, and SQL Source Control “manages” the script for you.  Basically, when you first link your database to source control, the tool generates scripts for every primary object (tables and their indexes are together in one script, not broken out into separate scripts like DB Projects do) and those scripts are checked into your source control.  So, if you needed to, you could still do a GET from your source control repository and build the database from scratch.  But for the day-to-day work, SQL Source Control uses the same technique as SQL Compare to determine what changes have been made to your development database and how to represent those in your repository scripts.  I think that once I retrain myself to just work in the database and quit worrying about having to find and open the right script file, that this will actually make us more efficient. And for deployment purposes, SQL Source Control integrates with the full SQL Compare utility to produce a synchronization script (or do a live sync).  This is similar in concept to Microsoft’s DACPAC, if you’re familiar with that. If you are not currently keeping your database development efforts under source control, definitely examine this tool.  If you already have a methodology that is working for you, then I still think this is worth a review and comparison to your current approach.  You may find it more efficient.  But remember that the version which integrates with Vault/Fortress is still in pre-release mode, so treat it with a little caution.  I have found it to be fairly stable, but there was one bug that I found which had inconvenient side-effects and could have really been frustrating if I had been running this on my normal active development machine.  However, I can verify that that bug has been fixed in a more recent build version (did I mention Red-Gate’s responsiveness?).

    Read the article

  • Getting Internet Explorer to Open Different Sets of Tabs Based on the Day of the Week

    - by Akemi Iwaya
    If you have to use Internet Explorer for work and need to open a different set of work-specific tabs every day, is there a quick and easy way to do it instead of opening each one individually? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader bobSmith1432 is looking for a quick and easy way to open different daily sets of tabs in Internet Explorer for his work: When I open Internet Explorer on different days of the week, I want different tabs to be opened automatically. I have to run different reports for work each day of the week and it takes a lot of time to open the 5-10 tabs I use to run the reports. It would be a lot faster if, when I open Internet Explorer, the tabs I needed would automatically load and be ready. Is there a way to open 5-10 different tabs in Internet Explorer depending on the day of the week? Example: Monday – 6 Accounting Pages Tuesday – 7 Billing Pages Wednesday – 5 HR Pages Thursday – 10 Schedule Pages Friday – 8 Work Summary/Order Pages Is there an easier way for Bob to get all those tabs to load and be ready to go each day instead of opening them individually every time? The Answer SuperUser contributor Julian Knight has a simple, non-script solution for us: Rather than trying the brute force method, how about a work around? Open up each set of tabs either in different windows, or one set at a time, and save all tabs to bookmark folders. Put the folders on the bookmark toolbar for ease of access. Each day, right-click on the appropriate folder and click on ‘Open in tab group’ to open all the tabs. You could put all the day folders into a top-level folder to save space if you want, but at the expense of an extra click to get to them. If you really must go further, you need to write a program or script to drive Internet Explorer. The easiest way is probably writing a PowerShell script. Special Note: There are various scripts shared on the discussion page as well, so the solution shown above is just one possibility out of many. If you love the idea of using scripts for a function like this, then make sure to browse on over to the discussion page to see the various ones SuperUser members have shared! Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
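
    As a rough illustration of the scripted route mentioned in the answer (which suggests a PowerShell script driving Internet Explorer), here is a minimal Python sketch of the same day-of-week logic; it opens the tabs in the default browser rather than specifically in IE, and the URLs are placeholders.

        import datetime
        import webbrowser

        TABS_BY_WEEKDAY = {
            0: ["http://intranet/accounting/report1", "http://intranet/accounting/report2"],  # Monday
            1: ["http://intranet/billing/report1"],                                           # Tuesday
            2: ["http://intranet/hr/report1"],                                                # Wednesday
            3: ["http://intranet/schedule/report1"],                                          # Thursday
            4: ["http://intranet/orders/report1"],                                            # Friday
        }

        for url in TABS_BY_WEEKDAY.get(datetime.date.today().weekday(), []):
            webbrowser.open_new_tab(url)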

    Read the article

  • ORA-7445 Troubleshooting

    - by [email protected]
    QUICKLINK: Note 153788.1 ORA-600/ORA-7445 Lookup tool
    Note 1082674.1: A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video]

    Have you observed an ORA-07445 error reported in your alert log? While the ORA-600 error is "captured" as a handled exception in the Oracle source code, the ORA-7445 is an unhandled exception error due to an OS exception, which should result in the creation of a core file. An ORA-7445 is a generic error, and can occur from anywhere in the Oracle code. The precise location of the error is identified by the core file and/or trace file it produces. Looking for the best way to diagnose? Whenever an ORA-7445 error is raised, a core file is generated. There may be a trace file generated with the error as well. Prior to 11g, the core files are located in the CORE_DUMP_DEST directory. Starting with 11g, there is a new advanced fault diagnosability infrastructure to manage trace data. Diagnostic files are written into a root directory for all diagnostic data called the ADR home. Core files at 11g will go to the ADR HOME/cdump directory. For more information on the Oracle 11g Diagnosability feature see:
    Note 453125.1 11g Diagnosability Frequently Asked Questions
    Note 443529.1 11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support [Video]

    NOTE: While the core file is captured in the Diagnosability infrastructure, the file may not be included with a diagnostic package.

    1. Check the Alert Log
    The alert log may indicate additional errors or other internal errors at the time of the problem. In some cases, the ORA-7445 error will occur along with ORA-600, ORA-3113, or ORA-4030 errors. The ORA-7445 error can be a side effect of the other problems and you should review the first error and associated core file or trace file and work down the list of errors.
    Note 1020463.6 DIAGNOSING ORA-3113 ERRORS
    Note 1812.1 TECH: Getting a Stack Trace from a CORE file
    Note 414966.1 RDA Documentation Index
    If the ORA-7445 errors are not associated with other error conditions, ensure the trace data is not truncated. If you see a message at the end of the file, "MAX DUMP FILE SIZE EXCEEDED", the MAX_DUMP_FILE_SIZE parameter is not set high enough or to 'unlimited'. There could be vital diagnostic information missing in the file and discovering the root issue may be very difficult. Set the MAX_DUMP_FILE_SIZE appropriately and regenerate the error for complete trace information. For pointers on deeper analysis of these errors see:
    Note 390293.1 Introduction to 600/7445 Internal Error Analysis
    Note 211909.1 Customer Introduction to ORA-7445 Errors

    2. Search 600/7445 Lookup Tool
    Visit My Oracle Support to access the ORA-00600 Lookup tool (Note 153788.1). The ORA-600/ORA-7445 Lookup tool may lead you to applicable content in My Oracle Support on the problem and can be used to investigate the problem with argument data from the error message, or you can pull out key stack pointers from the associated trace file to match up against known bugs.

    3. "Fine tune" searches in Knowledge Base
    As the ORA-7445 error indicates an unhandled exception in the Oracle source code, your search in the Oracle Knowledge Base will need to focus on the stack data from the core file or the trace file. Keep in mind that searches on generic argument data will bring back a large result set. The more you can learn about the environment and code leading to the errors, the easier it will be to narrow the hit list to match your problem.
    Note 153788.1 ORA-600/ORA-7445 Troubleshooter
    Note 1082674.1 A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video]

    NOTE: If no trace file is captured, see Note 1812.1 TECH: Getting a Stack Trace from a CORE file. Core files are managed through 11g Diagnosability, but are not packaged with other diagnostic data automatically. The core files can be quite large, but may be useful during analysis within Oracle Support.

    4. If assistance is required from Oracle
    Should it become necessary to get assistance from Oracle Support on an ORA-7445 problem, please provide, at a minimum:
    The Alert log
    Associated trace file(s) or incident package at 11g
    Patch level information
    Core file(s)
    Information about changes in configuration and/or application prior to issues
    If the error is reproducible, a self-contained reproducible testcase: Note 232963.1 How to Build a Testcase for Oracle Data Server Support to Reproduce ORA-600 and ORA-7445 Errors
    RDA report or Oracle Configuration Manager information
    Oracle Configuration Manager Quick Start Guide
    Note 548815.1 My Oracle Support Configuration Management FAQ
    Note 414966.1 RDA Documentation Index

    ***For reference to the content in this blog, refer to Note.1092832.1 Master Note for Diagnosing ORA-600

    Read the article

  • A few questions about integrating AudioKinetic Wwise and Unity

    - by SaldaVonSchwartz
    I'm new to Wwise and to using it with Unity, and though I have gotten the integration to work, I'm still dealing with some loose ends and have a few questions: (I'm on Unity 4.3 as of now, but I think it shouldn't make any difference.) The base path: The Wwise documentation implies you set this in the AkGlobalSoundEngineInitializer basePath public ivar, which is exposed to the editor. However, I found that this variable is not really used. Instead, the path is hardcoded to /Audio/GeneratedSoundBanks in AkBankPath. I had to modify both scripts to actually look in the path that I set in the editor property. What's the deal with this? Just sloppiness or am I missing something? Also about paths: since I'm on Mac, I'm using Unity natively under OS X and, in tandem, the Wwise authoring tool via VMWare, and I share the OS X Unity project folder so I can generate the soundbanks into the assets folder. However, the authoring tool (I downloaded the latest one for Windows) doesn't automatically generate any "platform-specific" subfolders for my Wwise files. That is, again, the Unity integration scripts assume the path to be /Audio/GeneratedSoundBanks/<my-platform>/ which in my case would be Mac (I set the authoring tool to generate for Mac). The documentation says Wwise will automatically generate the platform-specific folders, but it just dumps all the stuff in GeneratedSoundBanks. Am I missing some setting? Because right now I just manually create the /Mac folder. The C# methods AkSoundEngine.PostEvent and AkSoundEngine.LoadBank, for instance, have a few overloads, including ones where I can refer to the soundbanks or events by their ID. However, if I try to use these, for instance: AkSoundEngine.LoadBank(, AkSoundEngine.AK_DEFAULT_POOL_ID) where the int I got from the .h header, I get Ak_Fail. If I use the overloads that reference the objects by string name then it works. What gives? Converting the ID header to C#: The integration comes with a C# script that seems to fork a process to call Python, in turn, to convert the C++ header into a C# script. This always fails unless I manually execute the Python script myself from outside Unity. Might be a permissions thing, but has anyone experienced this? The Profiler: I set up the Unity player to run in the background and am using the "Profile" version of the plugin. However, when I start the Unity OS X standalone app, the profiler in VMWare does not see it. This I'm thinking might just be that I'm trying to see a running instance of the sound engine inside an OS X binary from a Windows virtual machine. But I'm just wondering if anyone has gotten the Windows profiler to see an OS X Unity binary. Different versions of the integration plugin: It's not clear to me from the documentation whether I have to manually (or write a script to do it) remove the "Profile" version and install the "Release" version when I'm going to do a Release build, or if I should install both versions in Unity and it'll select the right one. Thanks!

    Read the article

  • How to design a scalable notification system?

    - by Trent
    I need to write a notification system manager. Here are my requirements: I need to be able to send a Notification on different platforms, which may be totally different (for example, I need to be able to send either an SMS or an E-mail). Sometimes the notification may be the same for all recipients for a given platform, but sometimes it may be a notification per recipient (or several) per platform. Each notification can contain a platform-specific payload (for example, an MMS can contain a sound or an image). The system needs to be scalable: I need to be able to send a very large number of notifications without crashing either the application or the server. It is a two-step process: first a customer may type a message and choose a platform to send to, and the notification(s) should be created to be processed either in real time or later. Then the system needs to send the notification to the platform provider. For now, I've ended up with some thoughts, but I don't know how scalable the design will be or if it is a good design. I've thought of the following objects (in a pseudo language):

    A generic Notification object:
        class Notification {
            String $message;
            Payload $payload;
            Collection<Recipient> $recipients;
        }

    The problem with this object is: what if I have 1,000,000 recipients? Even if the Recipient object is very small, it'll take too much memory. I could also create one Notification per recipient, but some platform providers require me to send in batches, meaning I need to define one Notification with several Recipients. Each created notification could be stored in persistent storage like a DB or Redis. Would it be a good idea to aggregate this later to make sure it is scalable? In the second step, I need to process this notification. But how could I dispatch the notification to the right platform provider? Should I use an object like MMSNotification extending an abstract Notification? Or something like Notification.setType('MMS')? To allow processing a lot of notifications at the same time, I think a messaging queue system like RabbitMQ may be the right tool. Is it? It would allow me to queue a lot of notifications and have several workers pop notifications and process them. But what if I need to batch the recipients as seen above? Then I imagine a NotificationProcessor object to which I could add NotificationHandlers; each NotificationHandler would be in charge of connecting to the platform provider and performing the notification. I can also use an EventManager to allow pluggable behavior. Any feedback or ideas? Thanks for giving your time. Note: I'm used to working in PHP and it is likely the language of my choice.
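
    Since the question is language-agnostic (the pseudo code above is PHP-flavored), here is one possible sketch, in Python, of the batching and per-platform handler ideas; all class and function names are hypothetical, and a real system would have queue workers (RabbitMQ, etc.) call process() for each dequeued notification instead of calling it directly.

        from dataclasses import dataclass, field
        from typing import Callable, Dict, List

        @dataclass
        class Notification:
            platform: str                                   # e.g. "sms", "email", "mms"
            message: str
            payload: dict = field(default_factory=dict)     # platform-specific extras (image, sound, ...)
            recipient_batches: List[List[str]] = field(default_factory=list)

        def make_batches(recipients, batch_size=500):
            # Keep memory bounded: store recipients as provider-sized chunks, not one huge list
            return [recipients[i:i + batch_size] for i in range(0, len(recipients), batch_size)]

        class NotificationProcessor:
            def __init__(self):
                self.handlers: Dict[str, Callable] = {}

            def register(self, platform, handler):
                self.handlers[platform] = handler

            def process(self, notification: Notification):
                send = self.handlers[notification.platform]     # pick the platform provider
                for batch in notification.recipient_batches:    # one provider call per batch
                    send(batch, notification.message, notification.payload)

        # Usage: a queue worker would normally drive this loop
        processor = NotificationProcessor()
        processor.register("sms", lambda batch, msg, payload: print(f"SMS to {len(batch)} recipients: {msg}"))
        processor.process(Notification("sms", "Hello!",
                                       recipient_batches=make_batches([f"user{i}" for i in range(1200)])))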

    Read the article

  • Interviews: Going Beyond the Technical Quiz

    - by Tony Davis
    All developers will be familiar with the basic format of a technical interview. After a bout of CV-trawling to gauge basic experience, strengths and weaknesses, the interview turns technical. The whiteboard takes center stage and the challenge is set to design a function or query, or solve what on the face of it might seem a disarmingly simple programming puzzle. Most developers will have experienced those few panic-stricken moments, when one's mind goes as blank as the whiteboard, before un-popping the marker pen, and hopefully one's mental functions, to work through the problem. It is a way to probe the candidate's knowledge of basic programming structures and techniques and to challenge their critical thinking. However, these challenges or puzzles, often devised by some of the smartest brains in the development team, have a tendency to become unnecessarily 'tricksy'. They often seem somewhat academic in nature. While the candidate straight out of IT school might breeze through the construction of a Markov chain, a candidate with bags of practical experience but less in the way of formal training could become nonplussed. Also, a whiteboard and a marker pen make up only a very small part of the toolkit that a programmer will use in everyday work. I remember vividly my first job interview, for a position as technical editor. It went well, but after the usual CV grilling and technical questions, I was only halfway there. Later, they sat me alongside a team of editors, in front of a computer loaded with MS Word and a copy of SQL Server Query Analyzer, and my task was to edit a real chapter for a real SQL Server book that they planned to publish, including validating and testing all the code. It was a tough challenge but I came away with a sound knowledge of the sort of work I'd do, and its context. It makes perfect sense, yet my impression is that many organizations don't do this. Indeed, it is only relatively recently that Red Gate started to move over to this model for developer interviews. Now, instead of, or perhaps in addition to, the whiteboard challenges, the candidate can expect to sit with their prospective team, in front of Visual Studio, loaded with all the useful tools in the developer's kit (ReSharper and so on) and be asked to, for example, analyze and improve a real piece of software. The same principles should apply when interviewing for a database position. In addition to the usual questions challenging the candidate's knowledge of such things as b-trees, object permissions, database recovery models, and so on, sit the candidate down with the other database developers or DBAs. Arm them with a copy of Management Studio, and a few other tools, then challenge them to discover the flaws in a stored procedure, and improve its performance. Or present them with a corrupt database and ask them to get the database back online, and discover the cause of the corruption.

    Read the article

  • First time shadow mapping problems

    - by user1294203
    I have implemented basic shadow mapping for the first time in OpenGL using shaders and I'm facing some problems. Below you can see an example of my rendered scene: The process of shadow mapping I'm following is that I render the scene to the framebuffer using a View Matrix from the light's point of view and the projection and model matrices used for normal rendering. In the second pass, I send the above MVP matrix from the light's point of view to the vertex shader, which transforms the position to light space. The fragment shader does the perspective divide and changes the position to texture coordinates. Here is my vertex shader:

        #version 150 core
        uniform mat4 ModelViewMatrix;
        uniform mat3 NormalMatrix;
        uniform mat4 MVPMatrix;
        uniform mat4 lightMVP;
        uniform float scale;

        in vec3 in_Position;
        in vec3 in_Normal;
        in vec2 in_TexCoord;

        smooth out vec3 pass_Normal;
        smooth out vec3 pass_Position;
        smooth out vec2 TexCoord;
        smooth out vec4 lightspace_Position;

        void main(void){
            pass_Normal = NormalMatrix * in_Normal;
            pass_Position = (ModelViewMatrix * vec4(scale * in_Position, 1.0)).xyz;
            lightspace_Position = lightMVP * vec4(scale * in_Position, 1.0);
            TexCoord = in_TexCoord;
            gl_Position = MVPMatrix * vec4(scale * in_Position, 1.0);
        }

    And my fragment shader:

        #version 150 core
        struct Light{
            vec3 direction;
        };

        uniform Light light;
        uniform sampler2D inSampler;
        uniform sampler2D inShadowMap;

        smooth in vec3 pass_Normal;
        smooth in vec3 pass_Position;
        smooth in vec2 TexCoord;
        smooth in vec4 lightspace_Position;

        out vec4 out_Color;

        float CalcShadowFactor(vec4 lightspace_Position){
            vec3 ProjectionCoords = lightspace_Position.xyz / lightspace_Position.w;
            vec2 UVCoords;
            UVCoords.x = 0.5 * ProjectionCoords.x + 0.5;
            UVCoords.y = 0.5 * ProjectionCoords.y + 0.5;
            float Depth = texture(inShadowMap, UVCoords).x;
            if(Depth < (ProjectionCoords.z + 0.001))
                return 0.5;
            else
                return 1.0;
        }

        void main(void){
            vec3 Normal = normalize(pass_Normal);
            vec3 light_Direction = -normalize(light.direction);
            vec3 camera_Direction = normalize(-pass_Position);
            vec3 half_vector = normalize(camera_Direction + light_Direction);
            float diffuse = max(0.2, dot(Normal, light_Direction));
            vec3 temp_Color = diffuse * vec3(1.0);
            float specular = max(0.0, dot(Normal, half_vector));
            float shadowFactor = CalcShadowFactor(lightspace_Position);
            if(diffuse != 0 && shadowFactor > 0.5){
                float fspecular = pow(specular, 128.0);
                temp_Color += fspecular;
            }
            out_Color = vec4(shadowFactor * texture(inSampler, TexCoord).xyz * temp_Color, 1.0);
        }

    One of the problems is self-shadowing: as you can see in the picture, the crate has its own shadow cast on itself. What I have tried is enabling polygon offset (i.e. glEnable(GL_POLYGON_OFFSET_FILL) and glPolygonOffset(GLfloat, GLfloat)), but it didn't change much. As you see in the fragment shader, I have put a static offset value of 0.001, but I have to change the value depending on the distance of the light to get more desirable effects, which is not very handy. I also tried using front-face culling when I render to the framebuffer, but that didn't change much either. The other problem is that pixels outside the light's view frustum get shaded. The only object that is supposed to be able to cast shadows is the crate. I guess I should pick more appropriate projection and view matrices, but I'm not sure how to do that. What are some common practices? Should I pick an orthographic projection? From googling around a bit, I understand that these issues are not that trivial.
    Does anyone have any easy-to-implement solutions to these problems? Could you give me some additional tips? Please ask me if you need more information on my code. Here is a comparison with and without shadow mapping of a close-up of the crate. The self-shadowing is more visible.
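    A common answer to the second problem is to fit the light's frustum tightly around the geometry that actually casts shadows, and to treat anything that projects outside the shadow map as fully lit. The sketch below (numpy, purely for illustration; it is not taken from the poster's code, and the variable names are invented) shows the usual way to build such an orthographic projection for a directional light: transform the shadow casters' bounding points into light view space and wrap a glOrtho-style matrix around them.

        import numpy as np

        def fit_ortho_to_scene(light_view, scene_points):
            """light_view: 4x4 view matrix looking down -z from the light.
            scene_points: Nx3 array of world-space points bounding the shadow casters."""
            pts = np.c_[scene_points, np.ones(len(scene_points))] @ light_view.T
            lo = pts[:, :3].min(axis=0)
            hi = pts[:, :3].max(axis=0)
            l, r, b, t = lo[0], hi[0], lo[1], hi[1]
            near, far = -hi[2], -lo[2]        # the light looks down -z in view space
            # Standard glOrtho(l, r, b, t, near, far) matrix.
            return np.array([
                [2/(r-l), 0.0,      0.0,           -(r+l)/(r-l)],
                [0.0,     2/(t-b),  0.0,           -(t+b)/(t-b)],
                [0.0,     0.0,     -2/(far-near),  -(far+near)/(far-near)],
                [0.0,     0.0,      0.0,            1.0],
            ])

    On the shader side, a cheap companion fix is to return 1.0 (fully lit) from CalcShadowFactor whenever the projected UV coordinates fall outside [0, 1] or the projected depth exceeds 1, and a slope-scaled bias (proportional to the angle between the normal and the light direction) usually handles the remaining self-shadowing better than a fixed 0.001 offset.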

    Read the article

  • permanently load module

    - by Radu
    I have a Compaq Presario CQ-61 320SQ and I am using Ubuntu 10.04, because after the update to 10.10 my mouse and touchpad wouldn't work, the network wouldn't work, sound wouldn't work... (I managed to fix most of them after almost a month of googling, but not all; my two desktops have no problem with 10.10), so I decided to switch back to 10.04, where I have a problem: my broadband speed is very low because of the kernel module r8169. I downloaded the correct module, r8101, and have an rc.local entry that fixes this every time the computer boots. Question: can I load the module permanently from a specific location? I heard about /etc/modules, but that takes only the module name, while I have to load it from a specific path (where is the default path for that?). Thank you. So I studied the script: it creates the file r8101.ko in /lib/modules/`uname -r`/kernel/drivers/net, so I think that as long as nobody deletes that file and I don't update the kernel, adding r8101 to /etc/modules and r8169 to the blacklist may work... I will give it a try. EDIT2: So I added r8101 to /etc/modules and blacklist r8169 to /etc/modprobe.d/blacklist.conf. It still uses the old module; lsmod prints:

        radu@adu:~$ lsmod | grep r8
        r8101                  67626  0
        r8169                  34108  0
        mii                     4381  1 r8169

    EDIT: the module is loaded using this script that came with it:

        #!/bin/sh
        # invoke insmod with all arguments we got
        # and use a pathname, as insmod doesn't look in . by default
        TARGET_PATH=/lib/modules/`uname -r`/kernel/drivers/net
        echo
        echo "Check old driver and unload it."
        check=`lsmod | grep r8169`
        if [ "$check" != "" ]; then
            echo "rmmod r8169"
            /sbin/rmmod r8169
        fi
        check=`lsmod | grep r8101`
        if [ "$check" != "" ]; then
            echo "rmmod r8101"
            /sbin/rmmod r8101
        fi
        echo "Build the module and install"
        echo "-------------------------------" >> log.txt
        date 1>>log.txt
        make all 1>>log.txt || exit 1
        module=`ls src/*.ko`
        module=${module#src/}
        module=${module%.ko}
        if [ "$module" == "" ]; then
            echo "No driver exists!!!"
            exit 1
        elif [ "$module" != "r8169" ]; then
            if test -e $TARGET_PATH/r8169.ko ; then
                echo "Backup r8169.ko"
                if test -e $TARGET_PATH/r8169.bak ; then
                    i=0
                    while test -e $TARGET_PATH/r8169.bak$i
                    do
                        i=$(($i+1))
                    done
                    echo "rename r8169.ko to r8169.bak$i"
                    mv $TARGET_PATH/r8169.ko $TARGET_PATH/r8169.bak$i
                else
                    echo "rename r8169.ko to r8169.bak"
                    mv $TARGET_PATH/r8169.ko $TARGET_PATH/r8169.bak
                fi
            fi
        fi
        echo "Depending module. Please wait."
        depmod -a
        echo "load module $module"
        modprobe $module
        echo "Completed."
        exit 0

    Read the article

  • The importance of Unit Testing in BI

    - by Davide Mauri
    One of the main steps in the process we internally use to develop a BI solution is the implementation of Unit Tests of your BI data. As you may already know, I’ve created a simple (for now) tool that leverages NUnit to allow us to quickly create Unit Tests without having to resort to Visual Studio Database Professional: http://queryunit.codeplex.com/ Once you have a tool like this one, you can start to make sure that your BI solution (DWH and CUBE) is not only structurally sound (I mean, the cube or the report gets processed correctly), but also that the logical integrity of your business rules is enforced. For example, let’s say that the customer tells you that they will never create an invoice for a specific product line in 2010, since that product line is dismissed and will never be sold again. OK, we know that in theory this is true, but a lot of this business rule’s effectiveness depends on the fact that people do not make mistakes while inserting new orders/invoices and that the ERP used implements a check for this business logic. Unfortunately, these last two hypotheses are not always true, so you may find yourself really having some invoices for a product line that doesn’t exist anymore. Maybe this kind of situation will in future be solved using Master Data Management but, meanwhile, how can you give an idea of the data quality to your customers? How can you check that the logical integrity of the analytical data you produce is exactly what you expect? Well, Unit Testing of a DWH or a CUBE can be a solution. Once you have defined your test suite, by writing SQL and MDX queries that check that your data is what you expect it to be, and if you use NUnit (and QueryUnit does), you can then use a tool like NUnit2Report to create a nice HTML report that can be shipped via email to give information on data quality. In addition to that, since NUnit produces an XML file as a result, you can also import it into a SQL Server database and then monitor the quality of data over time. I’ll be speaking about this approach (and more in general about how to “engineer” a BI solution) at the next European SQL PASS: Adaptive BI Best Practices http://www.sqlpass.org/summit/eu2010/Agenda/ProgramSessions/AdaptiveBIBestPratices.aspx I’ll enjoy discussing all of this with you, so see you there! And remember: “if it ain’t tested, it’s broken!” (Sorry, I don’t remember who said that in the first place :-))
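    The kind of check described here is easy to picture with a small example. The following is a minimal sketch using Python's unittest, purely for illustration (QueryUnit itself sits on NUnit); the connection string, schema, table, and column names are all invented.

        import unittest
        import pyodbc

        class DwhBusinessRuleTests(unittest.TestCase):
            def setUp(self):
                # Assumed connection details; adjust to your own DWH.
                self.conn = pyodbc.connect(
                    "DRIVER={ODBC Driver 17 for SQL Server};"
                    "SERVER=localhost;DATABASE=MyDWH;Trusted_Connection=yes;")

            def test_no_invoices_for_dismissed_product_line_after_2009(self):
                # Business rule: the dismissed product line must never appear
                # on invoices dated 2010 or later.
                cur = self.conn.cursor()
                cur.execute("""
                    SELECT COUNT(*)
                    FROM fact.Invoice AS i
                    JOIN dim.Product AS p ON p.ProductKey = i.ProductKey
                    WHERE p.ProductLine = 'DISMISSED-LINE'
                      AND i.InvoiceDate >= '2010-01-01'
                """)
                offending_rows = cur.fetchone()[0]
                self.assertEqual(offending_rows, 0,
                                 f"{offending_rows} invoices violate the rule")

            def tearDown(self):
                self.conn.close()

        if __name__ == "__main__":
            unittest.main()

    Whatever the test framework, the shape is the same: a query that should return zero offending rows, and a report that tells the customer when it doesn't.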

    Read the article

  • How to implement an email unsubscribe system for a site with many kinds of emails?

    - by Mike Liu
    I'm working on a website that features many different types of emails. Users have accounts, and when logged in they have access to a settings page that they can use to customize what types of emails they receive. However, I'd also like to give users an easy way to unsubscribe directly from the emails they receive. I've looked into List-Unsubscribe headers as well as creating some type of one-click link that would unsubscribe a user from that type of email without requiring login or further action. The latter would probably require me to break convention and make changes to the database in response to a GET on the link. However, am I incorrect in thinking that either of these would require me to generate and permanently store a unique identifier in my database for every email I ever send, really complicating email delivery? Without that, I'm not sure how I would be able to uniquely identify a user and a type of email in order to change their email preferences, and this identifier would need to be stored forever, as a user could have an email sitting in their inbox for a long time before they decide to act on it. Alternatively, I was considering having a no-login page for managing email preferences. In contrast to the above, where I would need one of these identifiers for each email, this would only need one identifier per user, with no generation or other action required when sending an email. All of these raise security issues, as they could potentially be used by people to tamper with others' email preferences. This could be mitigated somewhat by ensuring that the identifier is really difficult to guess. For the once-per-user identifier approach, I was considering generating the identifier by passing a user's ID through some type of encryption algorithm; is this a sound approach? For the per-email identifiers, perhaps I could use a user's ID appended to the time. However, even this would not eliminate the problem entirely, as this would really just be security through obscurity; anyone with the URL could tamper with the preferences, and in the end the main defense would have to be that most people aren't so bored as to tamper with other people's email preferences. Are there any other alternatives I've missed, or issues or solutions with these that anyone can provide insight on? What are best practices in this area?
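    One widely used alternative worth noting: instead of storing a random identifier per email, sign the identifying information so the server can verify a link without persisting anything. Below is a minimal sketch in Python; the secret, the field layout, and the function names are assumptions for illustration, not a recommendation of a specific library.

        import binascii
        import hashlib
        import hmac
        import time
        from base64 import urlsafe_b64decode, urlsafe_b64encode

        SECRET = b"keep-this-server-side-and-out-of-emails"   # assumption

        def make_unsubscribe_token(user_id: int, email_type: str) -> str:
            # The payload identifies the user and the email type; the HMAC
            # makes it impractical to forge a link for someone else.
            payload = f"{user_id}:{email_type}:{int(time.time())}".encode()
            sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
            return (urlsafe_b64encode(payload).decode() + "." +
                    urlsafe_b64encode(sig).decode())

        def verify_unsubscribe_token(token: str):
            # Returns (user_id, email_type, issued_at), or None if tampered.
            try:
                payload_b64, sig_b64 = token.split(".")
                payload = urlsafe_b64decode(payload_b64)
                sig = urlsafe_b64decode(sig_b64)
            except (ValueError, binascii.Error):
                return None
            expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
            if not hmac.compare_digest(sig, expected):
                return None
            # Assumes email_type contains no ':' characters.
            user_id, email_type, issued_at = payload.decode().split(":")
            return int(user_id), email_type, int(issued_at)

    The timestamp lets the link expire if desired, and because nothing is stored per email, the only server-side state beyond the normal preferences table is the secret key.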

    Read the article

  • IE9 not rendering box-shadow Elements inside of Table Cells

    - by Rick Strahl
    Ran into an annoying problem today with IE 9. I've been slowly updating some older sites with CSS 3 tags, and for the most part IE9 does a reasonably decent job of working with the new CSS 3 features. Not all by a long shot, but at least some of the more useful ones like border-radius and box-shadow are supported. Until today I was happy to see that IE supported box-shadow just fine, but I ran into a problem with some old markup that uses tables for its main layout sections. I found that inside of a table cell IE fails to render a box-shadow. Below are images from Chrome (left) and IE 9 (right) of the same content. The download and purchase images are rendered with:

        <a href="download.asp" style="display:block;margin: 10px;"><img src="../images/download.gif" class="boxshadow roundbox" /></a>

    where the .boxshadow and .roundbox styles look like this:

        .boxshadow {
            -moz-box-shadow: 3px 3px 5px #535353;
            -webkit-box-shadow: 3px 3px 5px #535353;
            box-shadow: 3px 3px 5px #535353;
        }
        .roundbox {
            -moz-border-radius: 6px 6px 6px 6px;
            -webkit-border-radius: 6px;
            border-radius: 6px 6px 6px 6px;
        }

    And the problem is… collapsed table borders. Normally these two styles work just fine in IE 9 when applied to elements. But the box-shadow doesn't work inside of this markup, because the parent container is a table cell:

        <td class="sidebar" style="border-collapse: collapse">
            …
            <a href="download.asp" style="display:block;margin: 10px;"><img src="../images/download.gif" class="boxshadow roundbox" /></a>
            …
        </td>

    This HTML causes the image to not show a shadow. In actuality I'm not styling inline; as part of my browser Reset I have the following in my master .css file, which has the same effect as the inline style:

        table { border-collapse: collapse; border-spacing: 0; }

    border-collapse by default inherits from the parent, and so the TD inherits from table and tr - so TD tags are effectively collapsed. You can check out a test document that demonstrates this behavior here in this CodePaste.net snippet or run it here. How to work around this issue: to get IE9 to render the shadows inside of the TD tag correctly, I can just change the style explicitly NOT to use border-collapse:

        <td class="sidebar" style="border-collapse: separate; border-width: 0;">

    Or better yet (thanks to David's comment below), you can add border-collapse: separate to the .boxshadow style like this:

        .boxshadow {
            -moz-box-shadow: 3px 3px 5px #535353;
            -webkit-box-shadow: 3px 3px 5px #535353;
            box-shadow: 3px 3px 5px #535353;
            border-collapse: separate;
        }

    With either of these approaches IE renders the shadows correctly. Do you really need border-collapse? Should you bother with border-collapse? I think so! Collapsed borders render flat as a single fat line if a border-width and border-color are assigned, while separated borders render a thin line with a bunch of weird white space around it, or worse, render an old-skool 3D raised border, which is terribly ugly as well. So as a matter of course in any app my browser Reset includes the above code to make sure all tables with borders render the same flat borders. As you probably know, IE has all sorts of rendering issues in tables and on backgrounds (opacity backgrounds or image backgrounds), most of which are caused by the way that IE internally uses ActiveX filters to apply these effects. Apparently collapsed borders are yet one more item that causes problems with rendering. There you have it. Another crappy failure in IE we have to check for now, just one more reason to hate Internet Explorer.
    Luckily this one has a reasonably easy workaround. I hope this helps out somebody and saves them the hour I spent trying to figure out what caused this problem in the first place. Resources: Sample HTML document that demonstrates the behavior | Run the Sample. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in HTML, Internet Explorer.

    Read the article

  • A Slice of Raspberry Pi

    - by Phil Factor
    Guest editorial for the ITPro/SysAdmin newsletter. The Raspberry Pi Foundation has done a superb design job on their new $35 network-enabled Linux computer. This tiny machine, incorporating an ARM processor on a Broadcom BCM2835 multimedia chip, aims to put the fun back into learning computing. The public response has been overwhelmingly positive. Note that aim: "…to put the fun back". Education in Information Technology is in dire straits. It always has been, but seems to have deteriorated further still, even in the face of improved provision of equipment. In many countries, the government controls the curriculum. It predicted a shortage in office-based IT skills, and so geared the ICT curriculum toward mind-numbing training in word-processing and spreadsheet skills. Instead, the shortage has turned out to be in people with an engineering mindset, who can solve problems with whatever technologies are available and learn new techniques quickly, in a rapidly-changing field. In retrospect, the assumption that specific training was required rather than an education was an idiotic response to the arrival of mainstream information technology. As a result, ICT became a disaster area, which discouraged a generation of youngsters from a career in IT, and thereby led directly to the shortage of people with the skills that are required to exploit the potential of Information Technology. Raspberry Pi aims to reverse the trend. This is a rig that is geared to fast graphics in high resolution. It is no toy. It should be a superb games machine. However, the use of Fedora, Debian, or Arch Linux ARM shows the more serious educational intent behind the Foundation's work. It looks like it will even do some office work too! So, get hold of any power supply that provides a 5VDC source at the required 700mA; an old Blackberry charger will do or, alternatively, it will run off four AA cells. You'll need a USB hub to support the mouse and keyboard, and maybe a hard drive. You'll want a DVI monitor (with audio out) or TV (sound and video). You'll also need to be able to cope with wired Ethernet 10/100, if you want networking. With this lot assembled, stick the paraphernalia on the back of the HDTV with Blu Tack, get a nice keyboard, and you have a classy Linux-based home computer. The major cost is in the TV and the keyboard. If you're not already writing software for this platform, then maybe, at a time when some countries are talking of orders in the millions, you should consider it.

    Read the article

  • Beat detection and FFT

    - by Quincy
    So I am working on a platformer game which includes music with beat detection. I am currently using a simple rule: if the energy stored in the history buffer is smaller than the current energy, there is a beat. The problem with this is that, of course, if you use songs like rock songs where you have a pretty steady amplitude, this isn't going to work. So I looked further and found algorithms splitting the sound into multiple bands using an FFT. I then found this: http://en.literateprograms.org/Cooley-Tukey_FFT_algorithm_(C) The only problem I'm having is that I am quite new to audio and I have no idea how to use that to split the signal up into multiple signals. So my question is: how do you use an FFT to split a signal into multiple bands? Also, for the guys interested, this is my algorithm in C#:

        // C = threshold, N = size of history buffer / 1024
        public void PlaceBeatMarkers(float C, int N)
        {
            List<float> instantEnergyList = new List<float>();
            short[] samples = soundData.Samples;
            float timePerSample = 1 / (float)soundData.SampleRate;
            int sampleIndex = 0;
            int nextSamples = 1024;

            // Calculate instant energy for every 1024 samples.
            while (sampleIndex + nextSamples < samples.Length)
            {
                float instantEnergy = 0;
                for (int i = 0; i < nextSamples; i++)
                {
                    instantEnergy += Math.Abs((float)samples[sampleIndex + i]);
                }
                instantEnergy /= nextSamples;
                instantEnergyList.Add(instantEnergy);

                if (sampleIndex + nextSamples >= samples.Length)
                    nextSamples = samples.Length - sampleIndex - 1;
                sampleIndex += nextSamples;
            }

            int index = N;
            int numInBuffer = index;
            float historyBuffer = 0;

            // Fill the history buffer with n * instant energy
            for (int i = 0; i < index; i++)
            {
                historyBuffer += instantEnergyList[i];
            }

            // If instantEnergy / samples in buffer < instantEnergy for the next sample then add beat marker.
            while (index + 1 < instantEnergyList.Count)
            {
                if (instantEnergyList[index + 1] > (historyBuffer / numInBuffer) * C)
                    beatMarkers.Add((index + 1) * 1024 * timePerSample);

                historyBuffer -= instantEnergyList[index - numInBuffer];
                historyBuffer += instantEnergyList[index + 1];
                index++;
            }
        }
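    The usual answer to the band-splitting question looks roughly like the sketch below (numpy, purely for illustration; it is not a drop-in replacement for the C# above, and the band edges are arbitrary assumptions): FFT one 1024-sample window, group the frequency bins into a few bands, and keep a separate energy history per band so a steady rock track can still trigger beats in the band where the kick or snare lives.

        import numpy as np

        def band_energies(window, sample_rate, band_edges_hz=(0, 200, 800, 3200, 22050)):
            """window: one block of 1024 PCM samples; returns one energy value per band."""
            spectrum = np.fft.rfft(window * np.hanning(len(window)))   # windowed FFT
            power = np.abs(spectrum) ** 2
            freqs = np.fft.rfftfreq(len(window), d=1.0 / sample_rate)
            energies = []
            for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
                in_band = (freqs >= lo) & (freqs < hi)
                energies.append(power[in_band].sum())
            return energies

    Running the existing history-buffer comparison once per band, instead of once on the raw amplitude, is essentially what the multi-band beat detection write-ups describe.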

    Read the article

  • SSIS: Building SQL databases on-the-fly using concatenated SQL scripts

    - by DrJohn
    Over the years I have developed many techniques which help automate the whole SQL Server build process. In my current process, where I need to build entire OLAP data marts on-the-fly, I make regular use of a simple but very effective mechanism to concatenate all the SQL scripts together from my SSMS (SQL Server Management Studio) projects. This proves invaluable because in two clicks I can redeploy an entire SQL Server database with all tables, views, stored procedures etc. Indeed, I can also use the concatenated SQL scripts with SSIS to build SQL Server databases on-the-fly. You may be surprised to learn that I often redeploy the database several times per day, or even several times per hour, during the development process. This is because the deployment errors are logged and you can quickly see where SQL scripts have object dependency errors. For example, after changing a table structure you may have forgotten to change any related views. The deployment log immediately points out all the objects which failed to build so you can fix and redeploy the database very quickly. The alternative approach (i.e. doing changes in the database directly using the SSMS UI) would require you to check all dependent objects before making changes. The chances are that you will miss something and wonder why your app returns the wrong data – a common problem caused by changing a table without re-creating dependent views. Using SQL Projects in SSMS: a great many developers fail to make use of SQL Projects in SSMS (SQL Server Management Studio). To me they are an invaluable way of organizing your SQL scripts. The screenshot below shows a typical SSMS solution made up of several projects – one project for tables, another for views etc. The key point is that the projects naturally fall into the right order in the file system because of the project name. The number in the folder or file name ensures that the SQL scripts are concatenated together in the order in which they need to be executed. Hence the script filenames start with 100, 110 etc. Concatenating SQL scripts: to concatenate the SQL scripts together into one file, I use notepad.exe to create a simple batch file (see example screenshot) which uses the TYPE command to write the content of the SQL script files into a combined file. As the SQL scripts are in several folders, I simply use the TYPE command multiple times and append the output together. If you are unfamiliar with batch files, you may not know that the angled bracket (>) means write the output of the program into a file. Two angled brackets (>>) mean append the output of this program into a file. So the command line DIR > filelist.txt would write the content of the DIR command into a file called filelist.txt. In the example shown above, the concatenated file is called SB_DDS.sql. If, like me, you place the concatenated file under source code control, then the source code control system will change the file's attribute to "read-only", which in turn would cause the TYPE command to fail. The ATTRIB command can be used to remove the read-only flag. Using SQLCmd to execute the concatenated file: now that the SQL scripts are all in one big file, we can execute the script against a database using SQLCmd with another batch file as shown below. SQLCmd has numerous options, but the script shown above simply executes the SB_DDS.sql file against the SB_DDS_DB database on the local machine and logs the errors to a file called SB_DDS.log.
    So after executing the batch file you can simply check the error log to see if your database built without a hitch. If you have errors, then simply fix the source files, re-create the concatenated file and re-run SQLCmd to rebuild the database. This two-click operation allows you to quickly identify and fix errors in your entire database definition. Using SSIS to execute the concatenated file: to execute the concatenated SQL script using SSIS, you simply drop an Execute SQL task into your package, set the database connection as normal, and then select File Connection as the SQLSourceType (as shown below). Create a file connection to your concatenated SQL script and you are ready to go. Tips and tricks: add a new line at the end of every file – the most common problem encountered with this approach is that the GO statement on the last line of one file is placed on the same line as the comment at the top of the next file by the TYPE command; the easy fix for this is to ensure all your files have a new line at the end. Remove all USE database statements – SQLCmd identifies which database the script should be run against, so you should remove all USE database commands from your scripts; otherwise you may get unintentional side effects! Do the CREATE DATABASE separately – if you are using SSIS to create the database as well as create the objects and populate the database, then invoke the CREATE DATABASE command against the master database using a separate package before calling the package that executes the concatenated SQL script.
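    The batch files themselves appear only as screenshots in the original post, so as a rough, illustrative equivalent, here is the same concatenate-then-deploy step sketched in Python (the folder layout, database name, and file names are invented assumptions; the sqlcmd flags used are -S server, -d database, -i input file and -o log file):

        import subprocess
        from pathlib import Path

        PROJECT_ROOT = Path(r"C:\Dev\SB_DDS")            # assumed SSMS solution folder
        OUTPUT_SQL = PROJECT_ROOT / "SB_DDS.sql"
        LOG_FILE = PROJECT_ROOT / "SB_DDS.log"

        def concatenate_scripts():
            # Folder and file names start with 100, 110, ... so a plain sort
            # yields the execution order, just like the TYPE-based batch file.
            parts = []
            for script in sorted(PROJECT_ROOT.rglob("*.sql")):
                if script == OUTPUT_SQL:
                    continue
                text = script.read_text()
                if not text.endswith("\n"):
                    text += "\n"                         # keep GO off the next file's first line
                parts.append(text)
            OUTPUT_SQL.write_text("".join(parts))

        def deploy(server="localhost", database="SB_DDS_DB"):
            # Run the combined script and capture errors in the log file.
            subprocess.run(
                ["sqlcmd", "-S", server, "-d", database,
                 "-i", str(OUTPUT_SQL), "-o", str(LOG_FILE)],
                check=True)

        if __name__ == "__main__":
            concatenate_scripts()
            deploy()

    After it runs, the log file plays the same role as SB_DDS.log in the article: scan it for object dependency errors, fix the source scripts, and re-run.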

    Read the article

< Previous Page | 177 178 179 180 181 182 183 184 185 186 187 188  | Next Page >