Search Results

Search found 13401 results on 537 pages for 'double checked'.


  • 'Make' command compiling errors

    - by G_T
    I'm trying to locally install a program which is written in C++. I have downloaded the program and am attempting to use the "make" command to compile it, as the program's instructions dictate. However, when I do, I get this error:

        /usr/include/stdc-predef.h:30:26: fatal error: bits/predefs.h: No such file or directory
        compilation terminated.

    Looking around on the internet, some people seem to address this problem with:

        sudo apt-get install libc6-dev-i386

    I checked to see if this package was installed and it was not. When I try to install it I get:

        E: Unable to locate package libc6-dev-i386

    I have already run sudo apt-get update. I'm sure this is a rookie question, but any help is appreciated. I'm running 13.10 32-bit.

    UPDATE: I've tried other suggestions I've found for similar errors. All I have managed is a different but similar error. Here is what I get:

        Geoffrey@Geoffrey-Latitude-E6400:/usr/local/src/trinityrnaseq_r2013_08_14$ make
        Using gnu compiler for Inchworm and Chrysalis
        cd Inchworm && (test -e configure || autoreconf) \
        && ./configure --prefix=`pwd` && make install
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for gawk... no
        checking for mawk... mawk
        checking whether make sets $(MAKE)... yes
        checking for g++... g++
        checking for C++ compiler default output file name... a.out
        checking whether the C++ compiler works... yes
        checking whether we are cross compiling... no
        checking for suffix of executables...
        checking for suffix of object files... o
        checking whether we are using the GNU C++ compiler... yes
        checking whether g++ accepts -g... yes
        checking for style of include used by make... GNU
        checking dependency style of g++... gcc3
        checking for library containing cos... none required
        configure: creating ./config.status
        config.status: creating Makefile
        config.status: creating src/Makefile
        config.status: creating config.h
        config.status: config.h is unchanged
        config.status: executing depfiles commands
        make[1]: Entering directory `/usr/local/src/trinityrnaseq_r2013_08_14/Inchworm'
        Making install in src
        make[2]: Entering directory `/usr/local/src/trinityrnaseq_r2013_08_14/Inchworm/src'
        if g++ -DHAVE_CONFIG_H -I. -I. -I.. -pedantic -fopenmp -Wall -Wextra -Wno-long-long -Wno-deprecated -m64 -g -O2 -MT Fasta_entry.o -MD -MP -MF ".deps/Fasta_entry.Tpo" -c -o Fasta_entry.o Fasta_entry.cpp; \
        then mv -f ".deps/Fasta_entry.Tpo" ".deps/Fasta_entry.Po"; else rm -f ".deps/Fasta_entry.Tpo"; exit 1; fi
        In file included from Fasta_entry.hpp:4:0,
                         from Fasta_entry.cpp:1:
        /usr/include/c++/4.8/string:38:28: fatal error: bits/c++config.h: No such file or directory
         #include <bits/c++config.h>
                                    ^
        compilation terminated.
        make[2]: *** [Fasta_entry.o] Error 1
        make[2]: Leaving directory `/usr/local/src/trinityrnaseq_r2013_08_14/Inchworm/src'
        make[1]: *** [install-recursive] Error 1
        make[1]: Leaving directory `/usr/local/src/trinityrnaseq_r2013_08_14/Inchworm'
        make: *** [inchworm] Error 2
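    The second failure points at the -m64 flag in the compile line above: on a 32-bit install the 64-bit C++ headers are usually absent, so <bits/c++config.h> cannot be found for that target. As a rough way to confirm this outside the Trinity build, one could compile a trivial file with and without -m64. The sketch below rests on that assumption; the file name and the commands in the comments are illustrative, not from the original post.

        // include_test.cpp -- minimal include test for narrowing down the missing-header error.
        // Assumption: g++ 4.8 on 32-bit Ubuntu 13.10, as in the output above.
        #include <string>   // pulls in <bits/c++config.h> internally

        int main() {
            std::string s = "include test";
            return static_cast<int>(s.size());   // use the string so nothing is optimized away
        }

        // Hypothetical commands to try (not from the original post):
        //   g++ include_test.cpp        -- expected to succeed if the native headers are intact
        //   g++ -m64 include_test.cpp   -- expected to reproduce the "bits/c++config.h: No such file
        //                                  or directory" error on a 32-bit install that lacks the
        //                                  64-bit multilib headers (e.g., a g++-multilib package)

    If the plain build succeeds and only the -m64 build fails, the fix is likely installing multilib support or removing -m64 from the Trinity makefiles, rather than anything to do with libc6-dev-i386.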

    Read the article

  • How To Rip an Audio CD to FLAC with Foobar2000

    - by Mysticgeek
    Foobar2000 is a great audio player that is fully customizable, is light on system resources, and contains a lot of tools and features. Today we show you how to use it to rip an audio CD to FLAC format.

    Note: For this tutorial we're going to assume this is the first time you're ripping a disc with Foobar2000. We're running it on Windows 7 Ultimate 64-bit.

    Install Foobar2000 and FLAC

    First download and install Foobar2000 (link below). The main thing you'll want to make sure to enable during the install process is Audio CD Support… and the freedb Tagger, which are located under Optional Features; then continue through the rest of the install wizard. Next you need to install the latest version of the FLAC codec (link below), following the defaults.

    Rip Audio CD

    To rip a CD, place it in your CD-ROM drive, launch Foobar2000, and click File \ Open Audio CD. Select the appropriate CD drive and click the Rip button. Next you'll want to look up the disc information with freedb…or you can manually enter the track data if it's a custom disc. Select the proper tag information in the freedb tagger window, then click Update files. The data will be entered in; make sure the radio button next to Go to the Converter Setup dialog is selected, and click the Rip button.

    In the Converter Setup screen you can select the output format, which in our case is FLAC. In this window you can also choose several other options, like the output path and whether to merge the tracks into one file or keep individual files. When you have those settings completed click OK. Next you'll need to find flac.exe, which is located wherever you installed it. On our 64-bit Windows 7 system the default path is C:\Program Files (x86)\FLAC.

    Now wait while your CD is ripped and converted to FLAC. You'll get a Converter Status Report…after you've checked it over you can close out of it. If you set the option to show the output files after conversion you can take a look, make sure all tracks were converted, and play them right away if you want. You can play the tracks in Foobar2000 or any player that supports FLAC. If you want to use WMC or WMP, see our article on how to play FLAC files in Windows 7 Media Center or Player.

    That's all there is to it! If you're a fan of Foobar2000 and enjoy your music converted to FLAC format, Foobar2000 does the job quite well. There are a lot of customizations and tools you can use in Foobar2000 that we'll be taking a look at in future articles. For more information check out our look at this fully customizable music player. Foobar2000 runs on XP, Vista, and Windows 7.

    Links: Download Foobar2000 | Download FLAC

    Read the article

  • Getting a Database into Source Control

    - by Grant Fritchey
    For any number of reasons, from simple auditing, to change tracking, to automated deployment, to integration with application development processes, you’re going to want to place your database into source control. Using Red Gate SQL Source Control this process is extremely simple. SQL Source Control works within your SQL Server Management Studio (SSMS) interface.  This means you can work with your databases in any way that you’re used to working with them. If you prefer scripts to using the GUI, not a problem. If you prefer using the GUI to having to learn T-SQL, again, that’s fine. After installing SQL Source Control, this is what you’ll see when you open SSMS:   SQL Source Control is now a direct piece of the SSMS environment. The key point initially is that I currently don’t have a database selected. You can even see that in the SQL Source Control window where it shows, in red, “No database selected – select a database in Object Explorer.” If I expand my Databases list in the Object Explorer, you’ll be able to immediately see which databases have been integrated with source control and which have not. There are visible differences between the databases as you can see here:   To add a database to source control, I first have to select it. For this example, I’m going to add the AdventureWorks2012 database to an instance of the SVN source control software (I’m using uberSVN). When I click on the AdventureWorks2012 database, the SQL Source Control screen changes:   I’m going to need to click on the “Link database to source control” text which will open up a window for connecting this database to the source control system of my choice.  You can pick from the default source control systems on the left, or define one of your own. I also have to provide the connection string for the location within the source control system where I’ll be storing my database code. I set these up in advance. You’ll need two. One for the main set of scripts and one for special scripts called Migrations that deal with different kinds of changes between versions of the code. Migrations help you solve problems like having to create or modify data in columns as part of a structural change. I’ll talk more about them another day. Finally, I have to determine if this is an isolated environment that I’m going to be the only one use, a dedicated database. Or, if I’m sharing the database in a shared environment with other developers, a shared database.  The main difference is, under a dedicated database, I will need to regularly get any changes that other developers have made from source control and integrate it into my database. While, under a shared database, all changes for all developers are made at the same time, which means you could commit other peoples work without proper testing. It all depends on the type of environment you work within. But, when it’s all set, it will look like this: SQL Source Control will compare the results between the empty folders in source control and the database, AdventureWorks2012. You’ll get a report showing exactly the list of differences and you can choose which ones will get checked into source control. Each of the database objects is scripted individually. You’ll be able to modify them later in the same way. Here’s the list of differences for my new database:   You can select/deselect all the objects or each object individually. You also get a report showing the differences between what’s in the database and what’s in source control. 
If there was already a database in source control, you’d only see changes to database objects rather than every single object. You can see that the database objects can be sorted by name, by type, or other choices. I’m going to add a comment such as “Initial creation of database in source control.” And then click on the Commit button which will put all the objects in my database into the source control system. That’s all it takes to get the objects into source control initially. Now is when things can get fun with breaking changes to code, automated deployments, unit testing and all the rest.

    Read the article

  • How to Sync Any Browser’s Bookmarks With Your iPad or iPhone

    - by Chris Hoffman
    Apple makes it easy to synchronize bookmarks between the Safari browser on a Mac and the Safari browser on iOS, but you don’t have to use Safari — or a Mac — to sync your bookmarks back and forth. You can do this with any browser. Whether you’re using Chrome, Firefox, or even Internet Explorer, there’s a way to sync your browser bookmarks so you can access your same bookmarks on your iPad. Safari on a Mac Apple’s iCloud service is the officially supported way to sync data with your iPad or iPhone. It’s included on Macs, but Apple also offers similar iCloud bookmark syncing features for Windows. On a Mac, this should be enabled by default. To check whether it’s enabled, you can launch the System Preferences panel on your Mac, open the iCloud preferences panel, and ensure the Safari option is checked. If you’re using Safari on Windows — well, you shouldn’t be. Apple is no longer updating Safari for Windows. iCloud allows you to synchronize bookmarks between other browsers on your Windows system and Safari on your iOS device, so Safari isn’t necessary. Internet Explorer, Firefox, or Chrome via iCloud To get started, download Apple’s iCloud Control Panel application for Windows and install it. Launch the iCloud Control Panel and log in with the same iCloud account (Apple ID) you use on your iPad or iPhone. You’ll be able to enable Bookmark syncing with Internet Explorer, Firefox, or Chrome. Click the Options button to select the browser you want to synchronize bookmarks with. (Note that bookmarks are called “favorites” in Internet Explorer.) You’ll be able to access your synced bookmarks in the Safari browser on your iPad or iPhone, and they’ll sync back and forth automatically over the Internet. Google Chrome Sync Google Chrome also has its own built-in sync feature and Google provides an official Chrome app for iPad and iPhone. If you’re a Chrome user, you can set up Chrome Sync on your desktop version of Chrome — you should already have this enabled if you have logged into your Chrome browser. You can check if this Chrome Sync is enabled by opening Chrome’s settings screen and seeing whether you’re signed in. Click the Advanced sync settings button and ensure bookmark syncing is enabled. Once you have Chrome Sync set up, you can install the Chrome app from the App Store and sign in with the same Google account. Your bookmarks, as well as other data like your open browser tabs, will automatically sync. This can be a better solution because the Chrome browser is available for so many platforms and you gain the ability to synchronize other browser data, such as your open browser tabs, between your devices. Unfortunately, the Chrome browser is slower than Apple’s own Safari browser on iPad and iPhone because of the way Apple limits third-party browsers, so using it involves a trade-off. Manual Bookmark Sync in iTunes iTunes also allows you to sync bookmarks between your computer and your iPad or iPhone. It does this the old-fashioned way, by initiating a manual sync when your device is plugged in via USB. To access this option, connect your device to your computer, select the device in iTunes, and click the Info tab. This is the more outdated way of synchronizing your bookmarks. This feature may be useful if you want to create a one-time copy of your bookmarks from your PC, but it’s nowhere near ideal for regular syncing. You don’t have to use this feature, just as you really don’t have to use iTunes anymore. In fact, this option is unavailable if you’ve set up iCloud syncing in iTunes. 
After you set up bookmark syncing via iCloud or Chrome Sync, bookmarks will sync immediately after you save, remove, or edit them.     

    Read the article

  • Having Fun with Coding4Fun&rsquo;s Windows Phone 7 Controls

    - by mbcrump
    I’m a big believer in having a hobby project as you can probably tell from the first sentence in my “personal webpage using Silverlight” article. One of my current hobby projects is to re-do my current WP7 application in the marketplace. I knew up front that I needed a “Loading” animation and a better “About” box. After starting to develop my own, I noticed a great set of WP7 controls by Coding4Fun and decided to use them in my new application. Before I go any further they are FREE and Open-Source. It is really simple to get started, just go to the CodePlex site and click the download button. After you have downloaded it then extract it to a Folder and you will have 4 DLL files. They are listed below: Now create a Windows Phone 7 Project and add references to the DLL’s by right clicking on the References folder and clicking “Add references”.   After adding the references, we can get started. I needed a ProgressOverlay animation or “Loading Screen” while my RSS feed is downloading. Basically, you just need to add the following namespace to whatever page you want the control on: xmlns:Controls="clr-namespace:Coding4Fun.Phone.Controls;assembly=Coding4Fun.Phone.Controls" And then the code inside your Grid or wherever you want the Loading screen placed. <Controls:ProgressOverlay Name="progressOverlay" > <Controls:ProgressOverlay.Content> <TextBlock>Loading</TextBlock> </Controls:ProgressOverlay.Content> </Controls:ProgressOverlay> Bam, you now have a great looking loading screen. Of course inside the ProgressOverlay, you may want to add a Visibility property to turn it off after your data loads if you are using MVVM or similar pattern.   Next up, I needed a nice clean “About Box” that looks good but is also functional. Meaning, if they click on my twitter name, web or email to launch the appropriate task. Again, this is only a few lines of code: var p = new AboutPrompt(); p.VersionNumber = "2.0"; p.Show("Michael Crump", "@mbcrump", "[email protected]", @"http://michaelcrump.net"); A nice clean “About” box with just a few lines of code! I’m all for code that I don’t have to write. It also comes with a pretty sweet InputPrompt for grabbing info from a user: The code for this is also very simple: InputPrompt input = new InputPrompt(); input.Completed += (s, e) => { MessageBox.Show(e.Result.ToString()); }; input.Title = "Input Box"; input.Message = "What does a \"Developer Large\" T-Shirt Mean? "; input.Show(); I also enjoyed the PhoneHelper that allows you to get data out of the WMAppManifest File very easy. So for example if I wanted the Version info from the WMAppManifest file. I could write one line and get it. PhoneHelper.GetAppAttribute("Version") Of course you would want to make sure you add the following using statement: using Coding4Fun.Phone.Controls.Data; You can’t have all these cool controls without a great set of Converters. The included BooleanToVisibility converter will convert a Boolean to and from a Visibility value. This is excellent when using something like a CheckBox to display a TextBox when its checked. See the example below: The code is below: <phone:PhoneApplicationPage.Resources> <Converters:BooleanToVisibilityConverter x:Key="BooleanToVisibilityConverter"/> </phone:PhoneApplicationPage.Resources> <CheckBox x:Name="checkBox"/> <TextBlock Text="Display Text" Visibility="{Binding ElementName=checkBox, Path=IsChecked, Converter={StaticResource BooleanToVisibilityConverter} }"/> That’s not all the goodies included. 
    They also provide a RoundedButton, a TimePicker, and several other controls and converters. The documentation is great, and I would recommend you give them a shot if you need any of this functionality. Btw, thanks to Brian Peek for his awesome work on Coding4Fun!

    Read the article

  • Best Practices for Handing over Legacy Code

    - by PersonalNexus
    In a couple of months a colleague will be moving on to a new project and I will be inheriting one of his projects. To prepare, I have already ordered Michael Feathers' Working Effectively with Legacy Code. But this books as well as most questions on legacy code I found so far are concerned with the case of inheriting code as-is. But in this case I actually have access to the original developer and we do have some time for an orderly hand-over. Some background on the piece of code I will be inheriting: It's functioning: There are no known bugs, but as performance requirements keep going up, some optimizations will become necessary in the not too distant future. Undocumented: There is pretty much zero documentation at the method and class level. What the code is supposed to do at a higher level, though, is well-understood, because I have been writing against its API (as a black-box) for years. Only higher-level integration tests: There are only integration tests testing proper interaction with other components via the API (again, black-box). Very low-level, optimized for speed: Because this code is central to an entire system of applications, a lot of it has been optimized several times over the years and is extremely low-level (one part has its own memory manager for certain structs/records). Concurrent and lock-free: While I am very familiar with concurrent and lock-free programming and have actually contributed a few pieces to this code, this adds another layer of complexity. Large codebase: This particular project is more than ten thousand lines of code, so there is no way I will be able to have everything explained to me. Written in Delphi: I'm just going to put this out there, although I don't believe the language to be germane to the question, as I believe this type of problem to be language-agnostic. I was wondering how the time until his departure would best be spent. Here are a couple of ideas: Get everything to build on my machine: Even though everything should be checked into source code control, who hasn't forgotten to check in a file once in a while, so this should probably be the first order of business. More tests: While I would like more class-level unit tests so that when I will be making changes, any bugs I introduce can be caught early on, the code as it is now is not testable (huge classes, long methods, too many mutual dependencies). What to document: I think for starters it would be best to focus documentation on those areas in the code that would otherwise be difficult to understand e.g. because of their low-level/highly optimized nature. I am afraid there are a couple of things in there that might look ugly and in need of refactoring/rewriting, but are actually optimizations that have been out in there for a good reason that I might miss (cf. Joel Spolsky, Things You Should Never Do, Part I) How to document: I think some class diagrams of the architecture and sequence diagrams of critical functions accompanied by some prose would be best. Who to document: I was wondering what would be better, to have him write the documentation or have him explain it to me, so I can write the documentation. I am afraid, that things that are obvious to him but not me would otherwise not be covered properly. Refactoring using pair-programming: This might not be possible to do due to time constraints, but maybe I could refactor some of his code to make it more maintainable while he was still around to provide input on why things are the way they are. 
Please comment on and add to this. Since there isn't enough time to do all of this, I am particularly interested in how you would prioritize.

    Read the article

  • Reconciling the Boy Scout Rule and Opportunistic Refactoring with code reviews

    - by t0x1n
    I am a great believer in the Boy Scout Rule: Always check a module in cleaner than when you checked it out." No matter who the original author was, what if we always made some effort, no matter how small, to improve the module. What would be the result? I think if we all followed that simple rule, we'd see the end of the relentless deterioration of our software systems. Instead, our systems would gradually get better and better as they evolved. We'd also see teams caring for the system as a whole, rather than just individuals caring for their own small little part. I am also a great believer in the related idea of Opportunistic Refactoring: Although there are places for some scheduled refactoring efforts, I prefer to encourage refactoring as an opportunistic activity, done whenever and wherever code needs to cleaned up - by whoever. What this means is that at any time someone sees some code that isn't as clear as it should be, they should take the opportunity to fix it right there and then - or at least within a few minutes Particularly note the following excerpt from the refactoring article: I'm wary of any development practices that cause friction for opportunistic refactoring ... My sense is that most teams don't do enough refactoring, so it's important to pay attention to anything that is discouraging people from doing it. To help flush this out be aware of any time you feel discouraged from doing a small refactoring, one that you're sure will only take a minute or two. Any such barrier is a smell that should prompt a conversation. So make a note of the discouragement and bring it up with the team. At the very least it should be discussed during your next retrospective. Where I work, there is one development practice that causes heavy friction - Code Review (CR). Whenever I change anything that's not in the scope of my "assignment" I'm being rebuked by my reviewers that I'm making the change harder to review. This is especially true when refactoring is involved, since it makes "line by line" diff comparison difficult. This approach is the standard here, which means opportunistic refactoring is seldom done, and only "planned" refactoring (which is usually too little, too late) takes place, if at all. I claim that the benefits are worth it, and that 3 reviewers will work a little harder (to actually understand the code before and after, rather than look at the narrow scope of which lines changed - the review itself would be better due to that alone) so that the next 100 developers reading and maintaining the code will benefit. When I present this argument my reviewers, they say they have no problem with my refactoring, as long as it's not in the same CR. However I claim this is a myth: (1) Most of the times you only realize what and how you want to refactor when you're in the midst of your assignment. As Martin Fowler puts it: As you add the functionality, you realize that some code you're adding contains some duplication with some existing code, so you need to refactor the existing code to clean things up... You may get something working, but realize that it would be better if the interaction with existing classes was changed. Take that opportunity to do that before you consider yourself done. (2) Nobody is going to look favorably at you releasing "refactoring" CRs you were not supposed to do. A CR has a certain overhead and your manager doesn't want you to "waste your time" on refactoring. When it's bundled with the change you're supposed to do, this issue is minimized. 
    The issue is exacerbated by ReSharper, as each new file I add to the change (and I can't know in advance exactly which files will end up changed) is usually littered with errors and suggestions - most of which are spot on and totally deserve fixing. The end result is that I see horrible code, and I just leave it there. Ironically, I feel that fixing such code will not only fail to improve my standing, but will actually lower it and paint me as the "unfocused" guy who wastes time fixing things nobody cares about instead of doing his job. I feel bad about it because I truly despise bad code and can't stand looking at it, let alone calling it from my methods! Any thoughts on how I can remedy this situation?

    Read the article

  • GCC 4.2.1 Compiling on Cygwin(Win7 64bit) for iPhone [closed]

    - by Kenneth Noland
    Hey This is going to take a long while to explain, but the short version is that I am currently attempting to compile the LLVM GCC frontend for ARMv7 to compile apps for the Cortex-A8(iPhone 3GS). I'm running into an error from LD when compiling libgcc(part of the gcc compilation process) that has been driving me mad! The command is this: /usr/llvm-gcc-4.2-2.8.source/build/./gcc/xgcc \ -B/usr/llvm-gcc-4.2_2.8.source/build/./gcc \ -B/usr/local/arm-apple-darwin/bin \ -B/usr/local/arm-apple-darwin/lib \ -isystem /usr/local/arm-apple-darwin/include \ -isystem /usr/local/arm-apple-darwin/sys-include \ -O2 -g -W -Wall -Wwrite-strings -wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-inline -dynamiclib -nodefaultlibs -W1,-dead_strip \ -marm \ -install_name /usr/local/arm-apple-darwin/lib/libgcc_s.1.dylib \ -single_module -o ./libgcc_s.1.dylib.tmp \ -W1,-exported_symbols_list,libgcc/./libgcc.map -compatibility_version 1 -current_version 1.0 -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -DHAVE_GTHR_DEFAULT -DIN_LIBGCC2 -D__GCC_FLOAT_NOT_NEEDED -Dinhibit_libc \ ... long list of .o files ... \ -lc And the result is typically a lot of undefined references to malloc, free, exit, etc. which typically indicate that libc is not getting compiled in. After going through the list of errors that ld is throwing, I see at the top that it is attempting to pull in /usr/lib/libc.a and complains that it is not the correct platform. Okay, that makes sense, so I spent 5 minutes on google and found an answer. Turns out that if I copy the libSystem.dylib and rename it to libc.dylib, that should solve the problem, but it doesn't. I couldn't find a copy of that file on my phone, so I pulled it directly from the SDK. I then get this strange error: ld64: in /usr/local/arm-apple-darwin/lib/libc.dylib, can't re-map file, errno=22 At this point, I did everything I could think of. I grabbed a fresh copy of my /usr/lib folder from my iphone and confirmed that libSystem.dylib(and libSystem.B.dylib) wasn't there. I unpacked the raw .ipsw package for iOS 4.2.1 and once again, I could not find a copy of libSystem.dylib there either. I unpacked the iPhoneSDK and MacOS SDK and I managed to find a copy of it in both, but that error just kept persisting. I copied libSystem.dylib, libSystem.B.dylib, tried all sorts of combinations of renaming to libc.dylib and still nothing but errors. I can't find a way to get it to recognize the file and link against it. I also tried linking against the libc.a located in the iphone SDK and that didn't work either. I checked what ./xgcc was firing off, and it was my freshly built copy of arm-apple-darwin-ld64 which should be fine. A little bit of background here. I built LLVM+Clang 2.8 with no errors, and I rebuilt the ODCCTools with some light modifications to get it to compile on Cygwin(I'll post my changes in a patch along with a tutorial if I can get this to work). I also grabbed the iphone-dev "includes" and "csu" project and those completed successfully, although there really is no point to them since I can't get it to link against crt0.a. I'm running out of ideas here. Can anyone help me out on this?

    Read the article

  • Reflections on Life

    - by MOSSLover
    I haven’t written a blog post in a while.  I understand there is blog neglect going on, but there is a lot going on in my life.  I am trying really hard to embrace the change and roll with everything thrown my way.  I had a really hard year it was not my best and it was not my worst.  I cannot say it was entirely hard, because January 1st I received the MVP Award.  If you know me you know the three things that happened starting in August, but if you really know me it was miserable for a substantial period of time prior to August.  There was some personal life issues I neglected to deal with that came into a headway.  Anyway I’d like to think that as of today I am doing much better.  I finally went to Paris and London.  I found out I love Paris and Nottingham.  I think that London is something I need to visit a few more times.  I would love to go back to the UK and France.  I think I’d love to live overseas someday, but not anytime soon. The past few weeks were like a whirlwind experience.  I felt like I had been sitting around for months just waiting for this trip and the big move.  Maybe it was something I was waiting to do for several years.  I needed a big change.  I needed to get unstuck.  I feel like August, however horrible it was, helped me get to the point where I am somewhere happy.  For at least two years I have been miserable outside of my work (community and otherwise).  I was just downright unhappy.  One of my coworkers said that my tweets were just horrible this past year.  Depressing might I add.  I agree they were incredibly depressing for the past several years.  But things are on an upturn.  I decided a month or so ago that I was going to do all the things I have wanted to do without looking back.  So I dove into this trip and into this move to NYC head first.  I was scared for a bit and I didn’t think it would come through.  Everyone friend-wise and coworker-wise has helped me accomplish this great feat.  I am now a New Yorker and as of January 1st 100% living in the city. Thank you for those who have checked up on me.  Thank you for those who listened to all my problems and continue to do so.  Thank you to everyone who has helped me through this really terrible time.  You guys mean the world to me.  You are my friends.  Some of you I have not met and some of you I barely know.  I have been to a lot of events where people just walked up to me and asked me if I was doing ok.  I will continue to keep moving forward one foot in front of the other.  If I ever get so down again please remind me about this year.  I hope to see you all in the upcoming year as I attend more events.  Have a good night or a good morning or a good afternoon.  I will catch you all later. Technorati Tags: Life,2011,Disaster Year,Happinness

    Read the article

  • Unable to either locate any wireless networks nor even connect to wifi

    - by Leo Chan
    I'm new to Linux. I currently have installed ubuntu 12.10. I had a previous problem with my wireless card (see url to see previous problem : How to enable wireless in a Fujitsu LH532?). It now shows Connect to hidden network and create new wireless network but now unfortunately it simply cannot find any wireless connections. I did have a very thorough look around about this problem such as wait a little longer since sometimes it cannot load all the wireless connections available that quickly. My wifi is a hidden network and I have used the connect to hidden network feature but it keeps asking for my wep key which has been checked 4 times (I counted) and it still seems to not work; It keeps asking for the WEP key. I did try both WEP 40/128-bit key and WPA & WPA2 since previously on my windows it worked; My family later decided to use WEP. I only have a quick fix using a usb wireless stick and I wish to have a more solid fix. Thanks Results from sudo iwlist wlan0 scan wlan0 Scan completed : Cell 01 - Address: 00:1E:73:C8:62:BD Channel:6 Frequency:2.437 GHz (Channel 6) Quality=25/70 Signal level=-85 dBm Encryption key:on ESSID:"EnigmaHome" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s 36 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=000000cb3bb10a5c Extra: Last beacon: 696ms ago IE: Unknown: 000A456E69676D61486F6D65 IE: Unknown: 010482848B96 IE: Unknown: 030106 IE: Unknown: 0706484B20010B1E IE: Unknown: 2A0107 IE: Unknown: 32080C1218243048606C IE: Unknown: DD180050F2020101000003A4000027A4000042435E0062322F00 Cell 02 - Address: C8:3A:35:34:C1:60 Channel:6 Frequency:2.437 GHz (Channel 6) Quality=22/70 Signal level=-88 dBm Encryption key:on ESSID:"Tenda" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 9 Mb/s 18 Mb/s; 36 Mb/s; 54 Mb/s Bit Rates:6 Mb/s; 12 Mb/s; 24 Mb/s; 48 Mb/s Mode:Master Extra:tsf=000001336e70ffdd Extra: Last beacon: 716ms ago IE: Unknown: 000554656E6461 IE: Unknown: 010882848B961224486C IE: Unknown: 030106 IE: Unknown: 32040C183060 IE: Unknown: 0706434E20010D10 IE: Unknown: 33082001020304050607 IE: Unknown: 33082105060708090A0B IE: Unknown: DD270050F204104A0001101044000101104700102880288028801880A880C83A3534C160103C000101 IE: Unknown: 050400010000 IE: Unknown: 2A0106 IE: Unknown: 2D1AEC0117FFFF0000000000000000000000000000000C0000000000 IE: Unknown: 3D1606000500000000000000000000000000000000000000 IE: Unknown: 7F0101 IE: IEEE 802.11i/WPA2 Version 1 Group Cipher : CCMP Pairwise Ciphers (1) : CCMP Authentication Suites (1) : PSK Preauthentication Supported IE: Unknown: DD180050F2020101000003A4000027A4000042435E0062322F00 IE: Unknown: 0B05010089127A IE: Unknown: DD1E00904C33EC0117FFFF0000000000000000000000000000000C0000000000 IE: Unknown: DD1A00904C3406000500000000000000000000000000000000000000 IE: Unknown: DD07000C4304000000 Cell 03 - Address: 00:1E:73:C8:62:BF Channel:6 Frequency:2.437 GHz (Channel 6) Quality=47/70 Signal level=-63 dBm Encryption key:on ESSID:"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s 36 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=000000cb3bac614e Extra: Last beacon: 1064ms ago IE: Unknown: 00110000000000000000000000000000000000 IE: Unknown: 010482848B96 IE: Unknown: 030106 IE: Unknown: 050C010200000000000000000000 IE: Unknown: 0706484B20010B1E IE: Unknown: 2A0107 IE: Unknown: 32080C1218243048606C IE: Unknown: DD070050F202000100

    Read the article

  • Error:couldn't read file-Kernel panic

    - by Thanos
    I have just installed Ubuntu 12.04.1. To be honest, I had to run the installation several times before it finished properly. When I finally managed to install it, I powered on the laptop and GRUB showed up! I selected the Ubuntu generic entry. It takes some time to load, and when it does I get an error message stating:

        error: couldn't read file
        Press any key to continue

    If I press any key nothing happens. If I leave it there, in a short while a black screen loads which gives some weird messages:

        [0.946710] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block (0,0)
        [0.946755] Pid: 1, comm: swapper/0 Not tainted 3.2.0-29-generic #46-Ubuntu
        [0.946792] Call Trace:
        [0.946831] [<ffffffff81640ec8>] panic+0x91/0x1a4
        [0.946869] [<ffffffff81cfc01e>] mount_block_root+0xdc/0x18e
        [0.946909] [<ffffffff81002930>] ? populate_rootfs_wait+0x300/0x9d0
        [0.946947] [<ffffffff81cfc257>] mount_root+0x54/0x59
        [0.946982] [<ffffffff81cfcec9>] prepare_namespace+0x16d/0x1a6
        [0.947019] [<ffffffff81cfbd63>] kernel_init+0x153/0x158
        [0.947094] [<ffffffff81cfbc10>] ? start_kernel+0x3bd/0x3bd
        [0.947129] [<ffffffff81664030>] ? gs_change+0x13/0x13

    The thing is that the laptop isn't mine. A friend tried to dual boot Ubuntu alongside Windows 7 but didn't succeed. The Ubuntu option was in GRUB, but when you tried to boot it, the machine rebooted from the start. So from a Live CD I erased Ubuntu and started Windows to check if something had gone wrong; fortunately everything was OK and Windows started normally! So I tried to install Ubuntu again. Before the installation completed, the installer crashed! I was afraid that he had lost Windows, which turned out to be true... At that point I tried to install Windows, but whichever version I tried (XP; 7 Home, Professional, or Ultimate; 8) it could never reach the end. So I tried to reinstall Ubuntu, but I keep facing those weird messages. What can I do to move on?

    ______________________________________________________________________

    EDIT 1: I tried to check and fix the disk (if possible) with GParted. It took many hours, although GParted displayed only 01:14. I restarted the system and now I get messages that are not exactly the same; the numbers in brackets [ ] are different:

        [0.818189] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block (0,0)
        [0.818235] Pid: 1, comm: swapper/0 Not tainted 3.2.0-29-generic #46-Ubuntu
        [0.818272] Call Trace:
        [0.818312] [<ffffffff81640ec8>] panic+0x91/0x1a4
        [0.818351] [<ffffffff81cfc01e>] mount_block_root+0xdc/0x18e
        [0.818391] [<ffffffff81002930>] ? populate_rootfs_wait+0x300/0x9d0
        [0.818428] [<ffffffff81cfc257>] mount_root+0x54/0x59
        [0.818464] [<ffffffff81cfcec9>] prepare_namespace+0x16d/0x1a6
        [0.818501] [<ffffffff81cfbd63>] kernel_init+0x153/0x158
        [0.818574] [<ffffffff81cfbc10>] ? start_kernel+0x3bd/0x3bd
        [0.818610] [<ffffffff81664030>] ? gs_change+0x13/0x13

    What on earth is going on?

    ______________________________________________________________________

    EDIT 2: I forgot to mention that my friend punched his laptop during a game. After that his cooling fan began to make a weird noise, so I checked it; it is a bit bent but it is working. What I believe must be wrong is that his HDD makes a weird noise while trying to load Ubuntu, which means he might need a new HDD. Could that be true?

    Read the article

  • How to create a Global Rule that stores a document’s folder path in a custom metadata field

    - by Nicolas Montoya
    Efficiency purists would argue that redundancy is not necessary. In real life, we are willing to pay a price for performance - i.e. to have information at our fingertips. We have run into customers opting to store a document's folder path as a document metadata field. They have their reasons, half of the ECM community will agree with them, and the other half would raise an eyebrow. In the end, they are getting creative to achieve their document management goals.

    The steps below outline how to create a Global Rule that stores a document's folder path in a custom metadata field:

    Create a Global Rule via Configuration Manager > Rules Tab > Add. Check "Is global rule with priority", then check "Use rule activation condition". Go to "Edit" and check the actions for the Script Properties, then click OK, and the rule activation condition will appear. Next, go to the Fields Tab and add a Rule Field. Select the target Custom Metadata Field and click OK, then check "Is derived field", click "Edit", go to the Custom Tab in the Script Properties window, and enter the custom script below:

        <$if #active.dCollectionPath$>
        <$dprDerivedValue=#active.dCollectionPath$>
        <$else$>
        <$dprDerivedValue=#active.xCollectionIDPath$>
        <$endif$>

    For more information on the dCollectionPath property, see Section 8.2, Folder Services, of the Oracle® Fusion Middleware Services Reference Guide for Oracle Universal Content Management 11g Release 1 (11.1.1): http://docs.oracle.com/cd/E21043_01/doc.1111/e11011/c08_folders002.htm

    The above rule will keep the Custom Metadata Field updated with the folder path information when a document is checked in via the Content Server (CS) Web Interface or the Desktop Integration Suite (DIS).

    Read the article

  • Visual Studio Exceptions dialogs

    - by Daniel Moth
    Previously I covered step 1 of live debugging with start and attach. Once the debugger is attached, you want to go to step 2 of live debugging, which is to break. One way to break under the debugger is to do nothing, and just wait for an exception to occur in your code. This is true for all types of code that you debug in Visual Studio, and let's consider the following piece of C# code:

        3:  static void Main()
        4:  {
        5:      try
        6:      {
        7:          int i = 0;
        8:          int r = 5 / i;
        9:      }
        10:     catch (System.DivideByZeroException) {/*gulp. sue me.*/}
        11:     System.Console.ReadLine();
        12: }

    If you run this under the debugger do you expect an exception on line 8? It is a trick question: you have to know whether I have configured the debugger to break when exceptions are thrown (first-chance exceptions) or only when they are unhandled. The place you do that is in the Exceptions dialog which is accessible from the Debug->Exceptions menu and on my installation looks like this:

    Note that I have checked all CLR exceptions. I could have expanded (like shown for the C++ case in my screenshot) and selected specific exceptions. To read more about this dialog, please read the corresponding Exception Handling debugging msdn topic and all its subtopics. So, for the code above, the debugger will break execution due to the thrown exception (exactly as if the try..catch was not there), so I see the following Exception Thrown dialog:

    Note the following: I can hit continue (or hit break and then later continue) and the program will continue fine since I have a catch handler. If this was an unhandled exception, then that is what the dialog would say (instead of first chance exception) and continuing would crash the app. That hyperlinked text ("Open Exception Settings") opens the Exceptions dialog I described further up. The coolest thing to note is the checkbox - this is new in this latest release of Visual Studio: it is a shortcut to the checkbox in the Exceptions dialog, so you don't have to open it to change this setting for this specific exception - you can toggle that option right from this dialog.

    Finally, if you try the code above on your system, you may observe a couple of differences from my screenshots. The first is that you may have an additional column of checkboxes in the Exceptions dialog. The second is that the last dialog I shared may look different to you. It all depends on the Debug->Options settings, and the two relevant settings are in this screenshot: The Exception assistant is what configures the look of the UI when the debugger wants to indicate exception to you, and the Just My Code setting controls the extra column in the Exception dialog. You can read more about those options on MSDN: How to break on User-Unhandled exceptions (plus Gregg's post) and Exception Assistant.

    Before I leave you to go play with this stuff a bit more, please note that this level of debugging is now available for JavaScript too, and if you are looking at the Exceptions dialog and wondering what the "GPU Memory Access Exceptions" node is about, stay tuned on the C++ AMP blog ;-) Comments about this post by Daniel Moth welcome at the original blog.
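    Since the dialog screenshot above also shows the C++ Exceptions node, here is a minimal C++ analogue of the snippet - a sketch, not code from the post - that behaves the same way under the debugger: with "C++ Exceptions" checked as thrown in Debug->Exceptions, Visual Studio breaks at the throw below even though a handler catches it (a first-chance exception); with it unchecked, the debugger only breaks if nothing catches it.

        // first_chance.cpp - a hypothetical C++ counterpart to the C# example above
        #include <iostream>
        #include <stdexcept>

        int main() {
            try {
                // With first-chance breaking enabled, the debugger stops here at the throw,
                // even though the catch below handles it.
                throw std::runtime_error("divide by zero, more or less");
            } catch (const std::runtime_error&) {
                // Swallowed, so the program keeps running after you hit Continue.
            }
            std::cin.get();   // roughly System.Console.ReadLine()
            return 0;
        }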

    Read the article

  • How to Create and Manage Contact Groups in Outlook 2010

    - by Mysticgeek
    If you find you're sending emails to the same people all the time during the day, it's tedious entering their addresses individually. Today we take a look at creating Contact Groups to make the process a lot easier.

    Create Contact Groups

    Open Outlook and click on New Items \ More Items \ Contact Group. This opens the Contact Group window. Give your group a name, click on Add Members, and select the people you want to add from your Outlook Contacts or Address Book, or create new ones. If you select from your address book you can scroll through and add the contacts you want. If you have a large number of contacts you might want to search for them or use Advanced Find. If you want to add a new email contact to your group, you'll just need to enter their display name and email address, then click OK. If you want the new member added to your Contacts list, make sure Add to Contacts is checked. After you have the contacts you want in the group, click Save & Close.

    Now when you compose a message you should be able to type in the name of the Contact Group you created… If you want to make sure you have everyone included in the group, click on the plus icon to expand the contacts. You will get a dialog box telling you the members of the group will be shown and you cannot collapse it again. Check the box not to see the message again, then click OK. Then the members of the group will appear in the To field. Of course you can enter a Contact Group into the CC or Bcc fields as well.

    Add or Remove Members to a Contact Group

    After expanding the group you might notice some contacts aren't included, or there is an old contact you don't want in the group anymore. Click on the To button… Right-click on the Contact Group and select Properties. Now you can go ahead and Add Members… Or highlight a member and remove them…when finished click Save & Close.

    If you need to send emails to several of the same people, creating Contact Groups is a great way to save time by not entering them individually. If you work for a large company, creating Contact Groups by department is a must!

    Read the article

  • Merge sort versus quick sort performance

    - by Giorgio
    I have implemented merge sort and quick sort using C (GCC 4.4.3 on Ubuntu 10.04 running on a 4 GB RAM laptop with an Intel DUO CPU at 2GHz) and I wanted to compare the performance of the two algorithms. The prototypes of the sorting functions are: void merge_sort(const char **lines, int start, int end); void quick_sort(const char **lines, int start, int end); i.e. both take an array of pointers to strings and sort the elements with index i : start <= i <= end. I have produced some files containing random strings with length on average 4.5 characters. The test files range from 100 lines to 10000000 lines. I was a bit surprised by the results because, even though I know that merge sort has complexity O(n log(n)) while quick sort is O(n^2), I have often read that on average quick sort should be as fast as merge sort. However, my results are the following. Up to 10000 strings, both algorithms perform equally well. For 10000 strings, both require about 0.007 seconds. For 100000 strings, merge sort is slightly faster with 0.095 s against 0.121 s. For 1000000 strings merge sort takes 1.287 s against 5.233 s of quick sort. For 5000000 strings merge sort takes 7.582 s against 118.240 s of quick sort. For 10000000 strings merge sort takes 16.305 s against 1202.918 s of quick sort. So my question is: are my results as expected, meaning that quick sort is comparable in speed to merge sort for small inputs but, as the size of the input data grows, the fact that its complexity is quadratic will become evident? Here is a sketch of what I did. In the merge sort implementation, the partitioning consists in calling merge sort recursively, i.e. merge_sort(lines, start, (start + end) / 2); merge_sort(lines, 1 + (start + end) / 2, end); Merging of the two sorted sub-array is performed by reading the data from the array lines and writing it to a global temporary array of pointers (this global array is allocate only once). After each merge the pointers are copied back to the original array. So the strings are stored once but I need twice as much memory for the pointers. For quick sort, the partition function chooses the last element of the array to sort as the pivot and scans the previous elements in one loop. After it has produced a partition of the type start ... {elements <= pivot} ... pivotIndex ... {elements > pivot} ... end it calls itself recursively: quick_sort(lines, start, pivotIndex - 1); quick_sort(lines, pivotIndex + 1, end); Note that this quick sort implementation sorts the array in-place and does not require additional memory, therefore it is more memory efficient than the merge sort implementation. So my question is: is there a better way to implement quick sort that is worthwhile trying out? If I improve the quick sort implementation and perform more tests on different data sets (computing the average of the running times on different data sets) can I expect a better performance of quick sort wrt merge sort? EDIT Thank you for your answers. My implementation is in-place and is based on the pseudo-code I have found on wikipedia in Section In-place version: function partition(array, 'left', 'right', 'pivotIndex') where I choose the last element in the range to be sorted as a pivot, i.e. pivotIndex := right. I have checked the code over and over again and it seems correct to me. In order to rule out the case that I am using the wrong implementation I have uploaded the source code on github (in case you would like to take a look at it). 
Your answers seem to suggest that I am using the wrong test data. I will look into it and try out different test data sets. I will report as soon as I have some results.
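    On the closing question - whether a better quick sort implementation is worth trying - one plausible culprit in the timings above is duplicate keys: drawing 10,000,000 random strings of roughly 4-5 characters will likely produce many repeats, and a single-scan two-way partition like the one described puts every key equal to the pivot on one side, so it degrades toward O(n^2) on such input regardless of pivot choice, while picking the last element as pivot additionally degrades on pre-sorted data. Below is a minimal sketch of an in-place quick sort with a three-way ("fat") partition using the same (const char **lines, int start, int end) convention; it is not the poster's code, just a variant that would be worth benchmarking against the merge sort.

        /* Sketch (not the poster's code): three-way partition keeps keys equal to the
         * pivot out of the recursive calls, so duplicate-heavy inputs stay O(n log n). */
        #include <string.h>   /* strcmp */

        static void swap_ptr(const char **a, const char **b) {
            const char *tmp = *a; *a = *b; *b = tmp;
        }

        void quick_sort3(const char **lines, int start, int end) {
            if (start >= end) return;
            const char *pivot = lines[(start + end) / 2];   /* middle element as pivot value */
            int lt = start, i = start, gt = end;
            /* invariant: [start,lt) < pivot, [lt,i) == pivot, (gt,end] > pivot */
            while (i <= gt) {
                int c = strcmp(lines[i], pivot);
                if (c < 0)      swap_ptr(&lines[i++], &lines[lt++]);
                else if (c > 0) swap_ptr(&lines[i], &lines[gt--]);
                else            i++;
            }
            quick_sort3(lines, start, lt - 1);   /* strictly-less block  */
            quick_sort3(lines, gt + 1, end);     /* strictly-greater block */
        }

    Like the original, this sorts the pointer array in place and needs no auxiliary buffer, so the memory comparison with merge sort stays the same.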

    Read the article

  • GLSL subroutine not being used

    - by amoffat
    I'm using a gaussian blur fragment shader. In it, I thought it would be concise to include 2 subroutines: one for selecting the horizontal texture coordinate offsets, and another for the vertical texture coordinate offsets. This way, I just have one gaussian blur shader to manage. Here is the code for my shader. The {{NAME}} bits are template placeholders that I substitute in at shader compile time: #version 420 subroutine vec2 sample_coord_type(int i); subroutine uniform sample_coord_type sample_coord; in vec2 texcoord; out vec3 color; uniform sampler2D tex; uniform int texture_size; const float offsets[{{NUM_SAMPLES}}] = float[]({{SAMPLE_OFFSETS}}); const float weights[{{NUM_SAMPLES}}] = float[]({{SAMPLE_WEIGHTS}}); subroutine(sample_coord_type) vec2 vertical_coord(int i) { return vec2(0.0, offsets[i] / texture_size); } subroutine(sample_coord_type) vec2 horizontal_coord(int i) { //return vec2(offsets[i] / texture_size, 0.0); return vec2(0.0, 0.0); // just for testing if this subroutine gets used } void main(void) { color = vec3(0.0); for (int i=0; i<{{NUM_SAMPLES}}; i++) { color += texture(tex, texcoord + sample_coord(i)).rgb * weights[i]; color += texture(tex, texcoord - sample_coord(i)).rgb * weights[i]; } } Here is my code for selecting the subroutine: blur_program->start(); blur_program->set_subroutine("sample_coord", "vertical_coord", GL_FRAGMENT_SHADER); blur_program->set_int("texture_size", width); blur_program->set_texture("tex", *deferred_output); blur_program->draw(); // draws a quad for the fragment shader to run on and: void ShaderProgram::set_subroutine(constr name, constr routine, GLenum target) { GLuint routine_index = glGetSubroutineIndex(id, target, routine.c_str()); GLuint uniform_index = glGetSubroutineUniformLocation(id, target, name.c_str()); glUniformSubroutinesuiv(target, 1, &routine_index); // debugging int num_subs; glGetActiveSubroutineUniformiv(id, target, uniform_index, GL_NUM_COMPATIBLE_SUBROUTINES, &num_subs); std::cout << uniform_index << " " << routine_index << " " << num_subs << "\n"; } I've checked for errors, and there are none. When I pass in vertical_coord as the routine to use, my scene is blurred vertically, as it should be. The routine_index variable is also 1 (which is weird, because vertical_coord subroutine is the first listed in the shader code...but no matter, maybe the compiler is switching things around) However, when I pass in horizontal_coord, my scene is STILL blurred vertically, even though the value of routine_index is 0, suggesting that a different subroutine is being used. Yet the horizontal_coord subroutine explicitly does not blur. What's more is, whichever subroutine comes first in the shader, is the subroutine that the shader uses permanently. Right now, vertical_coord comes first, so the shader blurs vertically always. If I put horizontal_coord first, the scene is unblurred, as expected, but then I cannot select the vertical_coord subroutine! :) Also, the value of num_subs is 2, suggesting that there are 2 subroutines compatible with my sample_coord subroutine uniform. Just to re-iterate, all of my return values are fine, and there are no glGetError() errors happening. Any ideas?
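    Two details of the subroutine API are worth ruling out with a helper like set_subroutine: the array passed to glUniformSubroutinesuiv is indexed by subroutine-uniform location and its count must cover every active subroutine uniform in the stage (GL_ACTIVE_SUBROUTINE_UNIFORM_LOCATIONS), and the selection is context state that is lost every time glUseProgram is called, so it has to be re-sent after the program is bound and before each draw. The sketch below is not the poster's code - the loader header and the assumption that "program" is the linked blur program id are mine - but it shows one defensive way to set the uniform:

        // Diagnostic sketch for selecting a fragment-shader subroutine.
        // Assumes a GL 4.x context and an already-initialized loader such as GLEW.
        #include <GL/glew.h>
        #include <vector>
        #include <cassert>

        void select_fragment_subroutine(GLuint program, const char *uniform_name, const char *routine_name) {
            GLuint routine  = glGetSubroutineIndex(program, GL_FRAGMENT_SHADER, routine_name);
            GLint  location = glGetSubroutineUniformLocation(program, GL_FRAGMENT_SHADER, uniform_name);
            assert(routine != GL_INVALID_INDEX && location >= 0);   // both names resolved?

            GLint active = 0;   // number of subroutine uniform locations in this stage (1 for the blur shader)
            glGetProgramStageiv(program, GL_FRAGMENT_SHADER, GL_ACTIVE_SUBROUTINE_UNIFORM_LOCATIONS, &active);

            // The upload must cover all locations; with more than one subroutine uniform,
            // the other slots would also need valid indices of their own.
            std::vector<GLuint> indices(active, 0);
            indices[location] = routine;

            glUseProgram(program);                                                  // selection is tied to the bound program...
            glUniformSubroutinesuiv(GL_FRAGMENT_SHADER, active, indices.data());    // ...and must be re-sent before every draw
        }

    As a side note, routine_index coming back as 1 for the first-listed subroutine is expected: subroutine indices are assigned by the compiler with no ordering guarantee, so they should always be looked up by name as above.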

    Read the article

  • Focus on Social Relationship Management at Oracle OpenWorld

    - by Pat Ma
    Greetings from Oracle OpenWorld 2012. Today, we're going to focus on Social Relationship Management at Oracle OpenWorld. Social networking is touching all businesses today. Customers are speaking about your brand right now on social media sites. Your employees are speaking to one another on social media sites. In an Oracle survey, 40% of consumers factor in Facebook recommendations when making purchasing decisions. Despite the rise of social networking, 70% of marketers report having little understanding of social media conversations happening around their brand.

    Oracle has invested in technologies that will help companies leverage social media technologies for their enterprise. Our suite of social products is collectively known as Social Relationship Management. Customers are using Social Relationship Management to get analytics on social media conversations around their brand, manage multiple social media channels while keeping their brand consistent, optimize internal workflows and processes, and create better customer relationships and experiences.

    In this example, using Social Relationship Management, a high-end national grocery chain is able to see that "Coconut Water" is trending in San Francisco. They are now able to send a $2-off coconut water coupon to shoppers who have checked into their San Francisco locations. This promotion further drives sales of coconut water in San Francisco. In another example, using Social Relationship Management, a technology company creates multiple Facebook pages and runs campaigns on them. These social campaigns are now integrated and tracked as another marketing channel in Oracle Fusion CRM. The technology company can now track and respond to a particular customer as he moves across multiple channels – without having to restart the conversation each time the customer contacts the company. Furthermore, the technology company can see in one interface which marketing channel – including social – is performing best for each promotion.

    Besides being a Software-as-a-Service solution, social is also a Platform-as-a-Service solution. The benefit here is that customers can extend the functionality of our social applications to suit their particular needs or create their own social application from scratch. During the Social Developer track, developers are learning how to use Java and other industry-standard programming languages to plug social functionality into enterprise applications.

    To see how Social Relationship Management can help your business build better relationships and experiences with customers, visit us on the web at oracle.com/social. There are a lot more social-oriented sessions left at OpenWorld. To view a schedule of the upcoming social-oriented sessions, go here.

    Read the article

  • SQL SERVER – Read Only Files and SQL Server Management Studio (SSMS)

    - by pinaldave
    Just like any other Developer or DBA, SQL Server Management Studio is my favorite application. At any moment in time I have multiple instances of the application open and am working in them. Recently, I came across a very interesting feature in SSMS related to “Read Only” files. I believe it is a little-known feature as well, so I decided to write a blog about the same. First create a read only SQL file. You can make any file read only via Right Click >> Properties >> Select Attribute Read Only. Now open the same file in SQL Server Management Studio. You will find that beside the file name there is a small ‘lock’ icon. This small icon indicates that the file is read only. Now let us attempt to edit the read only file. It will let us edit the file any way we want; however, when we attempt to save it, it gives the following pop-up. The options in the pop-up are self-explanatory and I liked it. The goal of a read only file is to prevent users from making unintended changes, yet the user should still have complete control over his own file. The user should be aware that the file is read only, but if he wants to edit the file or save it as a new file, those choices should be presented to him – and the pop-up captures precisely that. Now let us check the option related to this feature in SSMS. Go to Menu >> Options >> Environment >> Documents. You will find the third option, which is “Allow editing of read-only files; warn when attempt to save”. In the above scenario it was already checked. Let us uncheck it and repeat the exercise we did earlier. I closed all the earlier windows to avoid confusion. With this option now unchecked, when I attempt to even modify the Read Only file, it gives me a totally different pop-up screen. It gives me options like “Edit In-Memory”, “Make Writeable”, etc. When you select “Edit In-Memory” it allows you to edit the file, and later you can save it as a new file – just like the earlier scenario we discussed. If you click Make Writeable, the Read Only restriction is removed and the file can be edited as you please. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Creating a branch for every Sprint

    - by Martin Hinshelwood
    There are a lot of developers using version control these days, but a feature of version control called branching is very poorly understood and remains unused by most developers in favour of Labels. Most developers think that branching is hard and complicated. It's not! What is hard and complicated is a bad branching strategy. Just like a bad software architecture, a bad branch architecture, or one that is not adhered to, can prove fatal to a project. When I was at Aggreko we had a fairly successful Feature branching strategy (although the developers hated it) that meant that we could have multiple feature teams working at the same time without impacting each other. Now, this had to be carefully orchestrated as it was a Business Intelligence team and many of the BI artefacts do not lend themselves to merging. Today at SSW I am working on a Scrum team delivering a product that will be used by many hundreds of developers. SSW SQL Deploy takes much of the pain out of upgrading production databases when you are not using the Database projects in Visual Studio. With Scrum each Scrum Team works for a fixed period of time on a single sprint. You can have one or more Scrum Teams involved in delivering a product, but all the work must be merged and tested, ready to be shown to the Product Owner at the Sprint Review meeting at the end of the current Sprint. So, what does this mean for a branching strategy? We have been using a “Main” (sometimes called “Trunk”) line and doing a branch for each sprint. It’s like Feature Branching, but with only ONE feature in operation at any one time, so no conflicts. Figure: DEV folder containing the Development branches. I know that some folks advocate applying a Label at the start of each Sprint and then rolling back if you need to, but I have always preferred the security of a branch. Like: being able to create a release from Main that has Sprint3 code even while Sprint4 is being worked on; being sure I can always create a stable build on request; being able to guarantee a version (labels are not auditable); being able to abandon the sprint without having to delete the code (rare, I know, but it would be a mess if it happened); being able to see the flow of change sets through to a safe release. It helps you find invalid dependencies when merging to Main, as there may be some file that is in everyone’s Sprint branch but never got checked in (we had this at the merge of Sprint2). If you are always operating in this way as a standard, it makes it easier to add more Scrum teams in the future. Muscle memory of this way of working. Don’t Like: additional DB space for the branches; baseless merging between sprint branches when changes are directly ported (note: I do not think we will ever attempt this!); it is maybe a bit tougher to see the history between sprint branches, since the changes go up through Main and down to another sprint branch (note: what you would have to do is see which Sprint the changes were made in and then check the history of the same file in that Sprint – a little bit of added complexity that you would have to deal with anyway with multiple teams); and, over time, you can end up with a lot of old unused sprint branches – perhaps destroy with /keephistory can help in this case (note: we ALWAYS delete the Sprint branch after it has been merged into Main; that is the theory anyway, and as you can see from the images Sprint2 has already been deleted).
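    As a rough illustration of the mechanics (my own sketch, not from the original post – the $/Product paths are hypothetical, and you may prefer to do all of this from Source Control Explorer), the branch-per-Sprint flow maps onto a handful of tf.exe commands:

        rem branch the new Sprint from Main at the start of the Sprint
        tf branch $/Product/Main $/Product/DEV/Sprint4 /checkin

        rem at the end of the Sprint, merge the Sprint branch back up to Main,
        rem then resolve any conflicts and check in the merge changeset
        tf merge $/Product/DEV/Sprint4 $/Product/Main /recursive

        rem once merged and verified, remove the old Sprint branch
        rem (destroy needs admin rights; a plain tf delete plus check-in works too)
        tf destroy $/Product/DEV/Sprint2 /keephistory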
Why take the chance of having a problem rolling back or wanting to keep some of the code, when you can just abandon a branch and start a new one? It just seems easier and less painful to use a branch to me! What do you think?   Technorati Tags: TFS,TFS2010,Software Development,ALM,Branching

    Read the article

  • Ubuntu 11.04 and 10.04 hang with black screen while installing from USB disk

    - by Bill
    I've been trying to install Ubuntu 11.04 from a USB flash stick and each time I try to boot from the USB key one of two things happens: A) The screen that asks you what you would like to do (e.g. run Ubuntu from the USB key or install it) shows up and the countdown to the default option starts to count down but as soon as I either touch the keyboard (sometimes I press enter or the arrow keys to select an option) or the countdown gets to zero the screen just locks up and nothing happens no matter how long I wait. B) When I boot from the USB key the screen will flicker for a second and then go black with a flashing white underscore at the top left corner of the screen. Again it doesn't matter how long I wait, nothing happens and pressing keys doesn't do a thing. The very first time I tried to install it I got a terminal-like screen that said something about a directory called 'casper' having an error of some sort. I have tried installing from USB using both 11.04 and 10.10. I'm about to try 10.04. I have read tons of forum posts about this but so far I haven't seen anything in the solutions that apply to me. My intention is to dual boot Windows 7 and Ubuntu. I must keep Windows as I am required to use Visual Studio for one of my college courses. Right now I'm using Wubi but I really want a full install. I can't use LVPM because it doesn't work with the version of Wubi I used. So now I'm thinking my best bet is to try to get a clean install working. I'd convert Wubi to a full install too, but there's no solution as far as I've read. So could someone tell me a reason why this is happening or if there's something I can do to get around the problem? I'm using a Gateway LT2802u netbook with an Intel Atom N455 processor, 1GB RAM, Intel Graphics Media Accelerator 3150 graphics card, and a 250GB HDD. I don't have anything on my current Wubi install that I can't replace so keep in mind when answering that I don't care if I lose my current settings and files from Wubi. Thanks everyone! UPDATE: I just answered my own question, so in case anyone else is having this same problem using similar hardware, do the following: When I first tried installing 11.04 I used the recommended universal installer tool to create the USB live/installation disk. That caused the original problem. Note that I had already downloaded the 11.04 ISO and did not use the included downloader from the USB creator. After that failed I used the same USB creator but had it download 10.10 for me. It also failed with the same issue. I repeated this process with unetbootin as well for both versions. Finally, I downloaded the Ubuntu 10.04 ISO and used the recommended USB creator once again. There was an error while creating the USB live install so I reformatted the USB key as FAT32 and tried again. It created the USB key. I then booted from the USB flash drive and selected "Install Ubuntu" (exact wording was different). It worked! It took me through the process that you see shown in pictures on the Ubuntu website. I let it create the appropriate partitions for me and it simply worked. I did get a few errors while the system tried to restart after it installed. It hung on a terminal-like screen but I pressed ENTER and it restarted. I booted into Windows 7, it checked the disks as it sensed that I messed with a partition, then it booted into Windows normally. Now I'm going to uninstall Wubi and update my new full install of Ubuntu! I'm excited to get the benefits of a full install now.
So in the end, hopefully someone can learn from what I did.

    Read the article

  • Ubuntu 12.04 Bootloader failed to install

    - by Chris
    Sorry about the excessively long question, but I figured giving more information would be better. I recently bought a new desktop for myself, running Windows 7. It has two hard drives, and I wanted to install Ubuntu on a small partition on the second hard drive. I created 25GB "free space" in Windows and ran a LiveCD install. I wanted to select the install options myself but accidentally selected "Install alongside Windows 7," but it seemed to pick up the free space and installed itself there as I wanted it to. However, I was told that the bootloader installation had failed. I chose to "Cancel installation," leaving my computer unable to boot. I wiped my computer and reinstalled Windows. After that, I tried installing Ubuntu through Windows using WUBI, once using files from my LiveCD and once downloading everything again. Both times the install succeeded, but both times when I restarted and tried to load Ubuntu, it gave me an error - wubildr.mbr was corrupt or missing. I checked in Windows - it was indeed present on the C:\ drive. I went back to the LiveCD installation, this time going the custom options route. I assigned 16GB to an Ext4 journaling file system and 10GB to a swap file. I got the same bootloader error as before. Being prompted to select a different partition to install the bootloader to, I first tried the partition Ubuntu was installed on. A window came up saying that the install had succeeded, but a second window gave me the same error and choices as before. I went through every single option it gave me, including the Windows partition and the hard drives themselves (dev/sda, dev/sdb). Same result. I then chose to not install a bootloader. Windows still works fine, and I assume Ubuntu has installed but is unbootable. Knowing that my computer could potentially brick itself again - and, this time around, with a lot of data to lose and hassle to go through if I mess it up - I really don't want to do anything without some advice. So I'll ask this: a) Why did the bootloader fail to install? Can I fix the error and install Ubuntu fresh? b) Is there any way to get around the error, install the bootloader, and point it towards an existing installation of Ubuntu? c) Is there a quicker and easier solution I might have missed? EDIT: Thanks for the tip, AthloX. After testing the liveCD in Virtualbox with no installation problems, I looked around for some alternate bootloaders but had no success. I attempted another install, which installed the bootloader and Ubuntu just fine but bricked Windows 7. I wiped both hard disks clean, including some "System Reserved" partitions I hadn't noticed before, before re-installing Windows 7 on one hard drive and immediately afterwards installing Ubuntu on the other. Now the computer boots into Windows, but I can pop into the BIOS at startup to boot into Ubuntu via its bootloader, and I'm guessing it'll only take a bit of poking at the BIOS to swap the load order. Many thanks!

    Read the article

  • JMSContext, @JMSDestinationDefinition, DefaultJMSConnectionFactory with simplified JMS API: TOTD #213

    - by arungupta
    "What's New in JMS 2.0" Part 1 and Part 2 provide comprehensive introduction to new messaging features introduced in JMS 2.0. The biggest improvement in JMS 2.0 is introduction of the "new simplified API". This was explained in the Java EE 7 Launch Technical Keynote. You can watch a complete replay here. Sending and Receiving a JMS message using JMS 1.1 requires lot of boilerplate code, primarily because the API was designed 10+ years ago. Here is a code that shows how to send a message using JMS 1.1 API: @Statelesspublic class ClassicMessageSender { @Resource(lookup = "java:comp/DefaultJMSConnectionFactory") ConnectionFactory connectionFactory; @Resource(mappedName = "java:global/jms/myQueue") Queue demoQueue; public void sendMessage(String payload) { Connection connection = null; try { connection = connectionFactory.createConnection(); connection.start(); Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); MessageProducer messageProducer = session.createProducer(demoQueue); TextMessage textMessage = session.createTextMessage(payload); messageProducer.send(textMessage); } catch (JMSException ex) { ex.printStackTrace(); } finally { if (connection != null) { try { connection.close(); } catch (JMSException ex) { ex.printStackTrace(); } } } }} There are several issues with this code: A JMS ConnectionFactory needs to be created in a application server-specific way before this application can run. Application-specific destination needs to be created in an application server-specific way before this application can run. Several intermediate objects need to be created to honor the JMS 1.1 API, e.g. ConnectionFactory -> Connection -> Session -> MessageProducer -> TextMessage. Everything is a checked exception and so try/catch block must be specified. Connection need to be explicitly started and closed, and that bloats even the finally block. The new JMS 2.0 simplified API code looks like: @Statelesspublic class SimplifiedMessageSender { @Inject JMSContext context; @Resource(mappedName="java:global/jms/myQueue") Queue myQueue; public void sendMessage(String message) { context.createProducer().send(myQueue, message); }} The code is significantly improved from the previous version in the following ways: The JMSContext interface combines in a single object the functionality of both the Connection and the Session in the earlier JMS APIs.  You can obtain a JMSContext object by simply injecting it with the @Inject annotation.  No need to explicitly specify a ConnectionFactory. A default ConnectionFactory under the JNDI name of java:comp/DefaultJMSConnectionFactory is used if no explicit ConnectionFactory is specified. The destination can be easily created using newly introduced @JMSDestinationDefinition as: @JMSDestinationDefinition(name = "java:global/jms/myQueue",        interfaceName = "javax.jms.Queue") It can be specified on any Java EE component and the destination is created during deployment. JMSContext, Session, Connection, JMSProducer and JMSConsumer objects are now AutoCloseable. This means that these resources are automatically closed when they go out of scope. This also obviates the need to explicitly start the connection JMSException is now a runtime exception. Method chaining on JMSProducers allows to use builder patterns. No need to create separate Message object, you can specify the message body as an argument to the send() method instead. Want to try this code ? Download source code! Download Java EE 7 SDK and install. 
Start GlassFish: bin/asadmin start-domain Build the WAR (in the unzipped source code directory): mvn package Deploy the WAR: bin/asadmin deploy <source-code>/jms/target/jms-1.0-SNAPSHOT.war And access the application at http://localhost:8080/jms-1.0-SNAPSHOT/index.jsp to send and receive a message using classic and simplified API. A replay of JMS 2.0 session from Java EE 7 Launch Webinar provides complete details on what's new in this specification: Enjoy!
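    To round out the sample above, here is a minimal receive-side counterpart using the same simplified API (my own sketch, not part of the original sample; it assumes the same java:global/jms/myQueue destination and omits imports just like the code above). The injected JMSContext creates a JMSConsumer, and receiveBody() hands back the payload directly:

        @Stateless
        public class SimplifiedMessageReceiver {

            @Inject
            JMSContext context;

            @Resource(mappedName = "java:global/jms/myQueue")
            Queue myQueue;

            public String receiveMessage() {
                // blocks until a message arrives and returns its body as a String
                return context.createConsumer(myQueue).receiveBody(String.class);
            }
        }

    If you do not want to block indefinitely, the receiveBody(String.class, timeoutInMillis) overload returns null when no message arrives in time.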

    Read the article

  • When is your interview?

    - by Rob Farley
    Sometimes it’s tough to evaluate someone – to figure out if you think they’d be worth hiring. These days, since starting LobsterPot Solutions, I have my share of interviews, on both sides of the desk. Sometimes I’m checking out potential staff members; sometimes I’m persuading someone else to get us on board for a project. Regardless of who is on which side of the desk, we’re both checking each other out. The world is not how it was some years ago. I’m pretty sure that every time I walk into a room for an interview, I’ve searched for them online, and they’ve searched for me. I suspect they usually have the easier time finding me, although there are obviously other Rob Farleys in the world. They may have even checked out some of my presentations from conferences, read my blog posts, maybe even heard me tell jokes or sing. I know some people need me to explain who I am, but for the most part, I think they’ve done plenty of research long before I’ve walked in the room. I remember when this was different (as it could be for you still). I remember a time when I dealt with recruitment agents, looking for work. I remember sitting in rooms having been given a test designed to find out if I knew my stuff or not, and then being pulled into interviews with managers who had to find out if I could communicate effectively. I’d need to explain who I was, what kind of person I was, what my value-system involved, and so on. I’m sure you understand what I’m getting at. (Oh, and in case you hadn’t realised, it’s a T-SQL Tuesday post, this month about interviews.) At TechEd Australia some years ago (either 2009 or 2010 – I forget which), I remember hearing a comment made during the ‘locknote’, the closing session. The presenter described a conversation he’d heard between two girls, discussing a guy that one of them had just started dating. The other girl expressed horror at the fact that her friend had met this guy in person, rather than through an online dating agency. The presenter pointed out that people realise that there’s a certain level of safety provided through the checks that those sites do. I’m not sure I completely trust this, but I’m sure it’s true for people’s technical profiles. If I interview someone, I hope they have a profile. I hope I can look at what they already know. I hope I can get samples of their work, and see how they communicate. I hope I can get a feel for their sense of humour. I hope I already know exactly what kind of person they are – their value system, their beliefs, their passions. Even their grammar. I can work out if the person is a good risk or not from who they are online. If they don’t have an online presence, then I don’t have this information, and the risk is higher. So if you’re interviewing with me, your interview started long before the conversation. I hope it started before I’d ever heard of you. I know the interview in which I’m being assessed started before I even knew there was a product called SQL Server. It’s reflected in what I write. It’s in the way I present. I have spent my life becoming me – so let’s talk! @rob_farley

    Read the article

  • Managing Scripts in Oracle SQL Developer

    - by thatjeffsmith
    You back up your databases, right? You back up your home computer – your media collection, tax documents, bank accounts, etc., right? You back up your handy-dandy SQL scripts, right? Ok, now that I’ve got your head nodding, I want to answer a question I get every so often: How can I manage my scripts in SQL Developer? This is an interesting question. First, it assumes that one SHOULD manage their scripts in their IDE. Now, what I think the question generally gets around to is, how can we: navigate to our scripts, open them, and execute them? What a good IDE should have is an interface to your existing Version Control System (VCS). SQL Developer supports both Subversion and Git out of the box. You can also download an extension via check-for-updates to get support for CVS. Now, what I’m about to show you COULD be done without versioning and controlling your scripts – but I want to ask you why you wouldn’t want to do this? So, I’m going to proceed and assume that you do INDEED version your scripts already. Seeing what scripts you’ve already got in your repository: this is very straightforward – just open the Team Versions panel, then connect to your repository. It shows you the files in your source control system. Now, I could ‘preview’ said file right away. If I open the file from here, we get a temp file copy down from the server to the local machine. This is a local temp copy of the controlled script – I can read/execute, but not write to it. And that might be all you need. But, if your script calls other scripts, then you’re going to want to check out the server copy of your stuff down to your local SVN working copy directory. That way when your script calls another script – you’re executing the PRODUCTION APPROVED copies of said scripts. And if you do SPOOL or other file I/O stuff, it will work as expected. To get to those said client copies of your scripts… enter the Files panel. The Files panel is accessible from the View menu. You can get to your files one of two ways. If you’ve touched the file recently, you can see it under the Recent tree. Otherwise, you can navigate to your local ‘checked out’ copies of your script(s). Open your local copies, see what’s changed, etc. And I can access the change history and see what’s been touched… What changes am I going to ‘push out’ if I commit this back to the server? Most of us work on teams, yes? This panel also gives me a heads up if someone else is making changes to the same file. I can see the ‘incoming’ changes as well. To Sum It Up… If I want to get a script to run: do a full get to your local directory, then open the script(s). The Files panel will tell you if your local copy is out of date from the server, and if you have made local changes you’ve forgotten to commit back up to the server and your fellow teammates. Now, if you’re the selfish type and don’t want to share, that’s fine. But you should still be backing up your scripts, and you can still use the Files panel to manage your scripts.
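    If you ever need to do that ‘full get’ outside of SQL Developer, the equivalent Subversion commands are short – a quick sketch, with a made-up repository URL and working-copy path:

        # grab a working copy of the team's scripts
        svn checkout https://svn.example.com/repos/scripts scripts

        # refresh the working copy before running anything
        svn update scripts

        # push your own changes back up for the rest of the team
        svn commit -m "tweak nightly load script" scripts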

    Read the article

  • jqGrid - customizing the multi-select option (restrict single selection and adding custom events)

    - by Renso
    Goal: Using the jqGrid to enable checkbox row selection - which is easy to set in the jqGrid - but also only allowing a single row to be selectable at a time, while adding events based on whether the row was selected or de-selected. Environment: jQuery 1.4.4, jqGrid 3.4.4a. Issue: The jqGrid does not support an option to restrict the multi-select to only allow a single selection. You may ask, why bother with the multi-select checkbox function if you only want to allow the selection of a single row? Good question; as an example, you may want to reserve the selection of a row to trigger one kind of event and use the checkbox multi-select to handle a different kind of event; in other words, when I select the row I want something entirely different to happen than when I check off the checkbox for that row. Also, the setSelection method of the jqGrid is a toggle and has no support for determining whether the checkbox has already been selected or not, so it will simply act as a switch - which it is designed to do - but with no way out of the box to only check off the box (as in not de-select) rather than act like a switch. Furthermore, getGridParam('selrow') does not indicate if the row was selected or de-selected, which seems a bit strange and is the main reason for this blog post. Solution: How this will act: When you check off a multi-select checkbox in the grid, and then commence to select another row by checking off that row's multi-select checkbox - I'm not talking here about clicking on the row but using the grid's multi-select checkbox - it will de-select the previous selection so that you are always left with only a single selection. Furthermore, once you select or de-select a multi-select checkbox, it fires off an event determined by whether the row was selected or de-selected, not merely clicked on. So if I de-select the row, do one thing; when selecting it, do another. Implementation (this of course is only a partial code snippet):

        multiselect: true,
        multiboxonly: true,
        onSelectRow: function (rowId) {
            var gridSelRow = $(item).getGridParam('selrow');
            var s;
            s = $(item).getGridParam('selarrrow');
            if (!s || !s[0]) {
                $(item).resetSelection();
                $('#productLineDetails').fadeOut();
                lastsel = null;
                return;
            }
            var selected = $.inArray(rowId, s) != -1;
            if (selected) {
                $('#productLineDetails').show();
            }
            else {
                $('#productLineDetails').fadeOut();
            }
            if (rowId && rowId !== lastsel && selected) {
                $(item).GridToForm(gridSelRow, '#productLineDetails');
                if (lastsel) $(item).setSelection(lastsel, false);
            }
            lastsel = rowId;
        },

    In the example code above: The "item" variable holds the selector (id) of the jqGrid. The following two settings ensure that the jqGrid will add the new column to select rows with a checkbox, and also not allow selection by clicking on the row, forcing the user to click on the multi-select checkbox to select it: multiselect: true, multiboxonly: true. Unfortunately, the var gridSelRow = $(item).getGridParam('selrow') call will only return the row whose checkbox was clicked, and NOT whether it was selected or de-selected, but it retrieves the row id, which is what we need. The following piece gets all rows that have been selected so far, as in those with a checked-off multi-select checkbox: var s; s = $(item).getGridParam('selarrrow');. Now determine if the checkbox the user just clicked on was selected or de-selected: var selected = $.inArray(rowId, s) != -1;. If it was selected, show the container "#productLineDetails"; if not, hide that container. The following instruction populates a form with the grid data using the built-in GridToForm method (just mentioned here as an example) ONLY if the row has been selected and NOT de-selected, and - more importantly - de-selects any other multi-select checkbox that may have been selected:

        if (rowId && rowId !== lastsel && selected) {
            $(item).GridToForm(gridSelRow, '#productLineDetails');
            if (lastsel) $(item).setSelection(lastsel, false);
        }
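    For context, here is a minimal sketch of how the snippet above might be wired into a grid definition. This is my own illustration, not part of the original post - the element id, url and column model are made up - but it shows that 'item' and 'lastsel' simply need to be in scope before the onSelectRow handler runs:

        var item = '#productLinesGrid';  // selector for the grid the handler refers to
        var lastsel = null;              // remembers the previously selected row id

        $(item).jqGrid({
            url: '/ProductLines/GetAll',          // hypothetical JSON data source
            datatype: 'json',
            colNames: ['Id', 'Name', 'Price'],
            colModel: [
                { name: 'Id', key: true, hidden: true },
                { name: 'Name' },
                { name: 'Price' }
            ],
            multiselect: true,    // adds the checkbox column
            multiboxonly: true,   // rows are only selected via the checkbox
            onSelectRow: function (rowId) {
                // ...the handler shown above goes here...
            }
        });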

    Read the article

< Previous Page | 477 478 479 480 481 482 483 484 485 486 487 488  | Next Page >