Search Results

Search found 2771 results on 111 pages for 'andrew arrow'.

Page 73/111

  • How to read/write high-resolution (24-bit, 8 channel) .wav files in Java?

    - by dB'
    I'm trying to write a Java application that manipulates high resolution .wav files. I'm having trouble importing the audio data, i.e. converting the .wav file into an array of doubles. When I use a standard approach an exception is thrown. AudioFileFormat as = AudioSystem.getAudioFileFormat(new File("orig.wav")); --> javax.sound.sampled.UnsupportedAudioFileException: file is not a supported file type Here's the file format info according to soxi: dB$ soxi orig.wav soxi WARN wav: wave header missing FmtExt chunk Input File : 'orig.wav' Channels : 8 Sample Rate : 96000 Precision : 24-bit Duration : 00:00:03.16 = 303526 samples ~ 237.13 CDDA sectors File Size : 9.71M Bit Rate : 24.6M Sample Encoding: 32-bit Floating Point PCM Can anyone suggest the simplest method for getting this audio into Java? I've tried using a few techniques. As stated above, I've experimented with the Java AudioSystem (on both Mac and Windows). I've also tried using Andrew Greensted's WavFile class, but this also fails (WavFileException: Compression Code 3 not supported). One workaround is to convert the audio to 16 bits using sox (with the -b 16 flag), but this is suboptimal since it increases the noise floor. Incidentally, I've noticed that the file CAN be read by libsndfile. Is my best bet to write a jni wrapper around libsndfile, or can you suggest something quicker? Note that I don't need to play the audio, I just need to analyze it, manipulate it, and then write it out to a new .wav file. * UPDATE * I solved this problem by modifying Andrew Greensted's WavFile class. His original version only read files encoded as integer values ("format code 1"); my files were encoded as floats ("format code 3"), and that's what was causing the problem. I'll post the modified version of Greensted's code when I get a chance. In the meantime, if anyone wants it, send me a message.
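
    A minimal, hypothetical sketch of the diagnosis step using plain java.io (it is not the WavFile API, and it assumes the common layout in which the fmt chunk immediately follows the 12-byte RIFF/WAVE header). It prints the format code, so you can confirm whether a file uses integer PCM (code 1) or IEEE float PCM (code 3) before deciding how to decode the samples:

        import java.io.DataInputStream;
        import java.io.FileInputStream;
        import java.io.IOException;

        public class WavFormatCode {
            public static void main(String[] args) throws IOException {
                try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
                    byte[] riffHeader = new byte[12];
                    in.readFully(riffHeader);                                     // "RIFF" <size> "WAVE"
                    byte[] chunkId = new byte[4];
                    in.readFully(chunkId);                                        // assumed to be "fmt "
                    int chunkSize  = Integer.reverseBytes(in.readInt());          // WAV fields are little-endian
                    int formatCode = Short.reverseBytes(in.readShort()) & 0xFFFF; // 1 = integer PCM, 3 = float PCM
                    int channels   = Short.reverseBytes(in.readShort()) & 0xFFFF;
                    int sampleRate = Integer.reverseBytes(in.readInt());
                    System.out.println(new String(chunkId, "US-ASCII") + " chunk (" + chunkSize + " bytes): "
                            + "format code " + formatCode + ", " + channels + " channels @ " + sampleRate + " Hz");
                }
            }
        }

    Once format code 3 is confirmed, each sample is a little-endian 32-bit float, which is exactly why readers that only expect 16- or 24-bit integer PCM reject the file.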

    Read the article

  • Problems with my JS array undefined x 7

    - by Dave
    I have an array I'm trying to loop through to create a new type of array specific to my current page. My array looks like this: //$_SESSION['data'] = Array ( [0] => 1 [1] => 0 [2] => Tom [8] => 1 [4] => 1 [5] => Array ( [7] => Array ( [0] => Andrew [1] => 1 [2] => 1 [4] => 0 [5] => avatar.jpg [6] => 1 ) ) [6] => Array ( [0] => 1 [1] => 2 ) ) So in my JS file I have this: var stats = <? echo $_SESSION['data'][5]; ?> ; //this is the array my_data = new Array(); for(var key in stats){ if(key in my_data){} else { //prevent double entry my_data[key] = new Array(); my_data[key][0] = stats[key][6]; my_data[key][1] = stats[key][5]; my_data[key][2] = stats[key][2]; my_data[key][3] = stats[key][0]; } } console.log(my_data); Now in console.log I get this: [undefined × 7, Array[4] 0: "1" 1: "avatar.jpg" 2: "1" 3: "Andrew" length: 4 __proto__: Array[0] ] I'm wondering why it is saying undefined × 7?
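
    A small illustration of what is going on (the data below is hypothetical but shaped like the session array above): my_data is an Array, and the only key in stats is 7, so writing my_data[7] leaves indices 0 through 6 as empty slots, which the console reports as "undefined × 7". A plain object avoids the padding:

        // stats mirrors $_SESSION['data'][5]: a single entry whose key is 7
        var stats = { 7: { 0: "Andrew", 2: 1, 5: "avatar.jpg", 6: 1 } };

        var my_data = [];                   // an Array: indices 0-6 stay empty
        for (var key in stats) {
            my_data[key] = [stats[key][6], stats[key][5], stats[key][2], stats[key][0]];
        }
        console.log(my_data);               // [undefined × 7, Array[4]]

        var keyed = {};                     // a plain object: only the keys you set exist
        for (var k in stats) {
            keyed[k] = [stats[k][6], stats[k][5], stats[k][2], stats[k][0]];
        }
        console.log(keyed);                 // Object { 7: [1, "avatar.jpg", 1, "Andrew"] }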

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an additional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations.
    (This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.) Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.) Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.) This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users. It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even acknowledgment that users would get the data they want even if it meant they would have to manually enter it into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations. Collaborative BI I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management system from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time.
    Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week where I got to reminisce with him for a bit) and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools will catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people." The Microsoft BI Stack in General A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years. Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?" Expo Hall I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught my eye, however, that I'll share here.
    Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions. Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught my eye which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive it in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I find myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind! Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below.

    Read the article

  • Recover Data Like a Forensics Expert Using an Ubuntu Live CD

    - by Trevor Bekolay
    There are lots of utilities to recover deleted files, but what if you can’t boot up your computer, or the whole drive has been formatted? We’ll show you some tools that will dig deep and recover the most elusive deleted files, or even whole hard drive partitions. We’ve shown you simple ways to recover accidentally deleted files, even a simple method that can be done from an Ubuntu Live CD, but for hard disks that have been heavily corrupted, those methods aren’t going to cut it. In this article, we’ll examine four tools that can recover data from the most messed up hard drives, regardless of whether they were formatted for a Windows, Linux, or Mac computer, or even if the partition table is wiped out entirely. Note: These tools cannot recover data that has been overwritten on a hard disk. Whether a deleted file has been overwritten depends on many factors – the quicker you realize that you want to recover a file, the more likely you will be able to do so. Our setup To show these tools, we’ve set up a small 1 GB hard drive, with half of the space partitioned as ext2, a file system used in Linux, and half the space partitioned as FAT32, a file system used in older Windows systems. We stored ten random pictures on each hard drive. We then wiped the partition table from the hard drive by deleting the partitions in GParted. Is our data lost forever? Installing the tools All of the tools we’re going to use are in Ubuntu’s universe repository. To enable the repository, open Synaptic Package Manager by clicking on System in the top-left, then Administration > Synaptic Package Manager. Click on Settings > Repositories and add a check in the box labelled “Community-maintained Open Source software (universe)”. Click Close, and then in the main Synaptic Package Manager window, click the Reload button. Once the package list has reloaded, and the search index rebuilt, search for and mark for installation one or all of the following packages: testdisk, foremost, and scalpel. Testdisk includes TestDisk, which can recover lost partitions and repair boot sectors, and PhotoRec, which can recover many different types of files from tons of different file systems. Foremost, originally developed by the US Air Force Office of Special Investigations, recovers files based on their headers and other internal structures. Foremost operates on hard drives or drive image files generated by various tools. Finally, scalpel performs the same functions as foremost, but is focused on enhanced performance and lower memory usage. Scalpel may run better if you have an older machine with less RAM. Recover hard drive partitions If you can’t mount your hard drive, then its partition table might be corrupted. Before you start trying to recover your important files, it may be possible to recover one or more partitions on your drive, recovering all of your files with one step. Testdisk is the tool for the job. Start it by opening a terminal (Applications > Accessories > Terminal) and typing in: sudo testdisk If you’d like, you can create a log file, though it won’t affect how much data you recover. Once you make your choice, you’re greeted with a list of the storage media on your machine. You should be able to identify the hard drive you want to recover partitions from by its size and label. TestDisk asks you to select the type of partition table to search for. In most cases (ext2/3, NTFS, FAT32, etc.) you should select Intel and press Enter. Highlight Analyse and press enter.
    In our case, our small hard drive has previously been formatted as NTFS. Amazingly, TestDisk finds this partition, though it is unable to recover it. It also finds the two partitions we just deleted. We are able to change their attributes, or add more partitions, but we’ll just recover them by pressing Enter. If TestDisk hasn’t found all of your partitions, you can try doing a deeper search by selecting that option with the left and right arrow keys. We only had these two partitions, so we’ll recover them by selecting Write and pressing Enter. Testdisk informs us that we will have to reboot. Note: If your Ubuntu Live CD is not persistent, then when you reboot you will have to reinstall any tools that you installed earlier. After restarting, both of our partitions are back to their original states, pictures and all. Recover files of certain types For the following examples, we deleted the 10 pictures from both partitions and then reformatted them. PhotoRec Of the three tools we’ll show, PhotoRec is the most user-friendly, despite being a console-based utility. To start recovering files, open a terminal (Applications > Accessories > Terminal) and type in: sudo photorec To begin, you are asked to select a storage device to search. You should be able to identify the right device by its size and label. Select the right device, and then hit Enter. PhotoRec asks you to select the type of partition to search. In most cases (ext2/3, NTFS, FAT, etc.) you should select Intel and press Enter. You are given a list of the partitions on your selected hard drive. If you want to recover all of the files on a partition, then select Search and hit enter. However, this process can be very slow, and in our case we only want to search for picture files, so instead we use the right arrow key to select File Opt and press Enter. PhotoRec can recover many different types of files, and deselecting each one would take a long time. Instead, we press “s” to clear all of the selections, and then find the appropriate file types – jpg, gif, and png – and select them by pressing the right arrow key. Once we’ve selected these three, we press “b” to save these selections. Press enter to return to the list of hard drive partitions. We want to search both of our partitions, so we highlight “No partition” and “Search” and then press Enter. PhotoRec prompts for a location to store the recovered files. If you have a different healthy hard drive, then we recommend storing the recovered files there. Since we’re not recovering very much, we’ll store it on the Ubuntu Live CD’s desktop. Note: Do not recover files to the hard drive you’re recovering from. PhotoRec is able to recover the 20 pictures from the partitions on our hard drive! A quick look in the recup_dir.1 directory that it creates confirms that PhotoRec has recovered all of our pictures, save for the file names. Foremost Foremost is a command-line program with no interactive interface like PhotoRec, but offers a number of command-line options to get as much data out of your hard drive as possible. For a full list of options that can be tweaked via the command line, open up a terminal (Applications > Accessories > Terminal) and type in: foremost -h In our case, the command line options that we are going to use are: -t, a comma-separated list of types of files to search for. In our case, this is “jpeg,png,gif”. -v, enabling verbose-mode, giving us more information about what foremost is doing. -o, the output folder to store recovered files in.
    In our case, we created a directory called “foremost” on the desktop. -i, the input that will be searched for files. This can be a disk image in several different formats; however, we will use a hard disk, /dev/sda. Our foremost invocation is: sudo foremost -t jpeg,png,gif -o foremost -v -i /dev/sda Your invocation will differ depending on what you’re searching for and where you’re searching for it. Foremost is able to recover 17 of the 20 files stored on the hard drive. Looking at the files, we can confirm that these files were recovered relatively well, though we can see some errors in the thumbnail for 00622449.jpg. Part of this may be due to the ext2 filesystem. Foremost recommends using the -d command-line option for Linux file systems like ext2. We’ll run foremost again, adding the -d command-line option to our foremost invocation: sudo foremost -t jpeg,png,gif -d -o foremost -v -i /dev/sda This time, foremost is able to recover all 20 images! A final look at the pictures reveals that the pictures were recovered with no problems. Scalpel Scalpel is another powerful program that, like Foremost, is heavily configurable. Unlike Foremost, Scalpel requires you to edit a configuration file before attempting any data recovery. Any text editor will do, but we’ll use gedit to change the configuration file. In a terminal window (Applications > Accessories > Terminal), type in: sudo gedit /etc/scalpel/scalpel.conf scalpel.conf contains information about a number of different file types. Scroll through this file and uncomment lines that start with a file type that you want to recover (i.e. remove the “#” character at the start of those lines). Save the file and close it. Return to the terminal window. Scalpel also has a ton of command-line options that can help you search quickly and effectively; however, we’ll just define the input device (/dev/sda) and the output folder (a folder called “scalpel” that we created on the desktop). Our invocation is: sudo scalpel /dev/sda -o scalpel Scalpel is able to recover 18 of our 20 files. A quick look at the files scalpel recovered reveals that most of our files were recovered successfully, though there were some problems (e.g. 00000012.jpg). Conclusion In our quick toy example, TestDisk was able to recover two deleted partitions, and PhotoRec and Foremost were able to recover all 20 deleted images. Scalpel recovered most of the files, but it’s very likely that playing with the command-line options for scalpel would have enabled us to recover all 20 images. These tools are lifesavers when something goes wrong with your hard drive. If your data is on the hard drive somewhere, then one of these tools will track it down!

    Read the article

  • Why does vbkeyup produce different results than vbkeydown does in this Code.

    - by Joshua Rhoads
    I have a VB6 app. It consists of a flexgrid. I have code to allow the user to press the up or down arrow key to switch rows in the grid. When the down arrow key is pressed the cursor is placed at the end of the text in the next row, but when the Up arrow key is pressed the cursor is placed in the middle of the text of the previous row. Anybody have any explantion for this. Private Sub Command1_Click() With MSFlexGrid1 .Cols = 4 .Rows = 5 .FixedCols = 1 .FixedRows = 1 MSFlexGrid1.TextMatrix(0, 1) = "FROM" MSFlexGrid1.TextMatrix(0, 2) = "THRU" MSFlexGrid1.TextMatrix(0, 3) = "PAGE" MSFlexGrid1.TextMatrix(1, 1) = "Aa" MSFlexGrid1.TextMatrix(1, 2) = "Az" MSFlexGrid1.TextMatrix(1, 3) = "-" MSFlexGrid1.TextMatrix(2, 1) = "Ba" MSFlexGrid1.TextMatrix(2, 2) = "Bz" MSFlexGrid1.TextMatrix(2, 3) = "-" MSFlexGrid1.TextMatrix(3, 1) = "Ca" MSFlexGrid1.TextMatrix(3, 2) = "Cz" MSFlexGrid1.TextMatrix(3, 3) = "-" MSFlexGrid1.TextMatrix(4, 1) = "Da" MSFlexGrid1.TextMatrix(4, 2) = "Dz" MSFlexGrid1.TextMatrix(4, 3) = "-" End With End Sub Private Sub Command2_Click() End End Sub Private Sub Form_Load() Text1.Visible = False End Sub Private Sub MSFlexGrid1_DblClick() FlexGridEdit Asc(" ") Exit Sub End Sub Private Sub FlexGridEdit(KeyAscii As Integer) Text1.Left = MSFlexGrid1.CellLeft + MSFlexGrid1.Left Text1.Top = MSFlexGrid1.CellTop + MSFlexGrid1.Top Text1.Width = MSFlexGrid1.ColWidth(MSFlexGrid1.Col) - 2 * (MSFlexGrid1.ColWidth (MSFlexGrid1.Col) - MSFlexGrid1.CellWidth) Text1.Height = MSFlexGrid1.RowHeight(MSFlexGrid1.Row) - 2 * (MSFlexGrid1.RowHeight(MSFlexGrid1.Row) - MSFlexGrid1.CellHeight) Text1.MaxLength = 2 Text1.Visible = True Text1.SetFocus Select Case KeyAscii Case 0 To Asc(" ") Text1.Text = MSFlexGrid1.Text Text1.SelStart = Len(Text1.Text) Case Else Text1.Text = Chr$(KeyAscii) Text1.SelStart = 1 End Select Exit Sub End Sub Function ValidateFlexGrid1() As Boolean Dim llCntrRow As Integer Dim llCntrCol As Integer Dim max_len As Single Dim new_len As Single Dim liCntr As Integer Dim lsCheck As String With MSFlexGrid1 If Text1.Visible Then .Text = Text1.Text If .Rows = .FixedRows Then ValidateFlexGrid1 = False End If For llCntrCol = .FixedCols To MSFlexGrid1.Cols - 1 For llCntrRow = .FixedRows To MSFlexGrid1.Rows - 1 If .TextMatrix(llCntrRow, llCntrCol) = "" Then ValidateFlexGrid1 = False Exit Function End If Next llCntrRow Next llCntrCol End With ValidateFlexGrid1 = True Exit Function End Function Private Sub Text1_Keydown(KeyCode As Integer, Shift As Integer) Select Case KeyCode Case vbKeyRight, vbKeyLeft, vbKeyReturn If Text1.Visible = True Then If Text1.Text = "/" Then If MSFlexGrid1.Row > 1 Then Text1.Text = MSFlexGrid1.TextMatrix(MSFlexGrid1.Row - 1, MSFlexGrid1.Col) Text1.SelStart = Len(Text1.Text) End If End If MSFlexGrid1.Text = Text1.Text If KeyCode = vbKeyRight Or KeyCode = vbKeyReturn Then If Text1.SelStart = Len(Text1.Text) Then FlexGridChkPos KeyCode FlexGridEdit Asc(" ") End If Else If Text1.SelStart = 0 Then FlexGridChkPos KeyCode FlexGridEdit Asc(" ") End If End If End If Case vbKeyDown, vbKeyUp If Text1.Visible = True Then MSFlexGrid1.Text = Text1.Text FlexGridChkPos KeyCode FlexGridEdit Asc(" ") End If End Select Exit Sub End Sub Function FlexGridChkPos(KeyCode As Integer) As Boolean Dim llNextRow As Long Dim llNextCol As Long Dim llCurrCol As Long Dim llCurrRow As Long Dim llTotCols As Long Dim llTotRows As Long Dim llBegRow As Long Dim llBegCol As Long Dim llCntrCol As Long Dim lsText As String With MSFlexGrid1 llCurrRow = .Row + 1 llCurrCol = .Col + 1 llTotRows = .Rows llTotCols = 
.Cols llBegRow = .FixedRows llBegCol = .FixedCols If KeyCode = vbKeyRight Or KeyCode = vbKeyReturn Then llNextCol = llCurrCol + 1 If llNextCol > llTotCols Then llNextRow = llCurrRow + 1 If llNextRow > llTotRows Then .Rows = .Rows + 1 llCurrRow = llCurrRow + 1 llCurrCol = 1 + llBegCol Else llCurrRow = llNextRow llCurrCol = 1 + llBegCol End If Else llCurrCol = llNextCol End If End If If KeyCode = vbKeyLeft Then llNextCol = llCurrCol - 1 If llNextCol = llBegCol Then llNextRow = llCurrRow - 1 If llNextRow = llBegRow Then llCurrRow = llTotRows Else llCurrRow = llNextRow End If llCurrCol = llTotCols Else llCurrCol = llNextCol End If End If If KeyCode = vbKeyUp Then llNextRow = llCurrRow - 1 If llNextRow = llBegRow Then llCurrRow = llTotRows Else llCurrRow = llNextRow End If End If If KeyCode = vbKeyDown Then llNextRow = llCurrRow + 1 If llNextRow > llTotRows Then llCurrRow = llBegRow + 1 Else llCurrRow = llNextRow End If End If .Col = llCurrCol - 1 .Row = llCurrRow - 1 End With Exit Function End Function

    Read the article

  • Shrinking image by 57% and centering inside css structure

    - by Johua
    Hi, I'm really stuck. I'll go step by step and hope to make it short. This is the HTML structure: <li class="FAVwithimage"> <a href=""> <img src="pics/Joshua.png"> <span class="name">Joshua</span> <span class="comment">Developer</span> <span class="arrow"></span> </a> </li> Before I paste the CSS classes, some info about the exact goal to accomplish: Resize the picture (img) by 57%. If it cannot be done with CSS, then a jQuery/JavaScript solution. For example: the original pic is 240x240px and I need to resize it by 57%. That means that a pic of 400x400 would be bigger after resizing. After resizing, the picture needs to be centered vertically & horizontally inside 68x90 boundaries. So you have an LI element, which has an A element, and inside A we have IMG; IMG is resized by 57% and centered, where the maximum width can of course be 68px and the maximum height 90px. Now for that to work I was adding a SPAN element around the IMG. This is what I was thinking: <li class="FAVwithimage"> <a href=""> <span class="picHolder"><img src="pics/Joshua.png"></span> <span class="name">Joshua</span> <span class="comment">Developer</span> <span class="arrow"></span> </a> </li> Then I would give the span element display:block and w=68px, h=90px. But unfortunately that didn't work. I know it's a long post but I've done my best to describe it very simply. Beneath are the CSS classes and a picture to see what I need. li.FAVwithimage { height: 90px!important; } li.FAVwithimage a, li.FAVwithimage:hover a { height: 81px!important; } That's what's relevant. I have not included the classes for name, comment, arrow. And now the classes that are incomplete and refer to IMG: li.FAVwithimage a span.picHolder{ /*put the picHolder to the beginning of the LI element*/ position: absolute; left: 0; top: 0; width: 68px; height: 90px; diplay:block; border:1px solid #F00; } The border is used just temporarily to show the actual picHolder. It is now at the beginning of the LI, and width and height are set. li.FAVwithimage span.picHolder img { max-width:68px!important; max-height:90px!important; } This is the class which should shrink the pic by 57% and center it inside picHolder. Here I have a drawing describing what I need:
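
    Since scaling an image to 57% of its own natural size is awkward in pure CSS, a jQuery sketch may be the simpler route (the selectors come from the markup above; the 68x90 box and the 0.57 factor are the question's values, and .picHolder is assumed to keep its position: absolute, with overflow: hidden added so oversized images are clipped evenly):

        // run after images have loaded so width()/height() report their natural size
        $(window).load(function () {
            $('li.FAVwithimage span.picHolder img').each(function () {
                var img = $(this);
                var w = Math.round(img.width() * 0.57);   // 57% of the original picture
                var h = Math.round(img.height() * 0.57);
                img.width(w).height(h).css({
                    display: 'block',
                    marginLeft: Math.round((68 - w) / 2) + 'px',  // negative margins still
                    marginTop:  Math.round((90 - h) / 2) + 'px'   // centre oversized images
                });
            });
        });

    This is only a sketch; if a picture is larger than the 68x90 holder, the negative margins plus overflow: hidden on .picHolder give a centred crop rather than an error.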

    Read the article

  • Change columns order in bootstrap 3

    - by TNK
    I am trying to change the Bootstrap column order on desktop and mobile screens. <div class="container"> <div class="row"> <section class="content-secondary col-sm-4"> <h4>Don't Miss!</h4> <p>Suspendisse et arcu felis, ac gravida turpis. Suspendisse potenti. Ut porta rhoncus ligula, sed fringilla felis feugiat eget. In non purus quis elit iaculis tincidunt. Donec at ultrices est.</p> <p><a class="btn btn-primary pull-right" href="#">Read more <span class="icon fa fa-arrow-circle-right"></span></a></p> <h4>Check it out</h4> <p>Suspendisse et arcu felis, ac gravida turpis. Suspendisse potenti. Ut porta rhoncus ligula, sed fringilla felis feugiat eget. In non purus quis elit iaculis tincidunt. Donec at ultrices est.</p> <p><a class="btn btn-primary pull-right" href="#">Read more <span class="icon fa fa-arrow-circle-right"></span></a></p> <h4>Finally</h4> <p>Suspendisse et arcu felis, ac gravida turpis. Suspendisse potenti. Ut porta rhoncus ligula, sed fringilla felis feugiat eget. In non purus quis elit iaculis tincidunt. Donec at ultrices est.</p> <p><a class="btn btn-primary pull-right" href="#">Read more <span class="icon fa fa-arrow-circle-right"></span></a></p> </section> <section class="content-primary col-sm-8"> <div class="main-content" ........... ........... </div> </section> </div><!-- /.row --> </div><!-- /.container --> View Port = SM columns order should be : |content- secondary | content-primary | View Port < SM columns order should be : | content-primary | |content- secondary | I tried something like this using 'pull' and 'push' classes. <div class="row"> <section class="content-secondary col-sm-4 col-sm-push-4"> Content ...... </section> <section class="content-primary col-sm-8 col-sm-pull-8"> Content ...... </section> </div> But this is not working for me. Hope someone will help me out. Thank you.
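
    A sketch of the usual Bootstrap 3 pattern for this (class names taken from the markup above): put the primary column first in the source so it stacks on top below the sm breakpoint, then swap the two visually at sm and up with push/pull offsets sized to the opposite column's width: push the 8-column block right by 4, and pull the 4-column block left by 8.

        <div class="row">
          <section class="content-primary col-sm-8 col-sm-push-4">
            <div class="main-content">
              ... main content: first in the source, so it shows on top on phones ...
            </div>
          </section>
          <section class="content-secondary col-sm-4 col-sm-pull-8">
            ... sidebar: pulled to the left of the main column at sm and up ...
          </section>
        </div><!-- /.row -->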

    Read the article

  • Xubuntu login hangs after Cancel Button click

    - by akester
    I'm running Xubuntu 12.04 (I installed using the alternative installer.) running in Virtaulbox 4.1.20 My issue is with the login screen (lightdm-gtk-greeter). It usually runs just fine, and allows users to log in and out but it will hang if the user presses the cancel button. The interface is still working (ie, shutdown menu is still available, I can switch to a different tty) but the username or password field (depending on when the button is hit) is disabled. Restarting lightdm will reset the screen, but the problem still exists. The issue is only with the cancel button. The login, session, and language buttons/menus as well as the accessibility and shutdown menu appear to work normally. I've modified some of the config files for lighdm-gtk-greeter, specifically /etc/lightdm/lighdm-gtk-greeter.conf to change the background image and /etc/lightdm/lightdm.conf to disable the user list. I did not check to see if the error existed before the changes took place. The changes have been restored the default settings but the problem persists. Here is the output of /var/log/lightdm/lightdm.log when the screen is hung: [+0.00s] DEBUG: Logging to /var/log/lightdm/lightdm.log [+0.00s] DEBUG: Starting Light Display Manager 1.2.1, UID=0 PID=2072 [+0.00s] DEBUG: Loaded configuration from /etc/lightdm/lightdm.conf [+0.00s] DEBUG: Using D-Bus name org.freedesktop.DisplayManager [+0.00s] DEBUG: Registered seat module xlocal [+0.00s] DEBUG: Registered seat module xremote [+0.00s] DEBUG: Adding default seat [+0.00s] DEBUG: Starting seat [+0.00s] DEBUG: Starting new display for greeter [+0.00s] DEBUG: Starting local X display [+0.02s] DEBUG: Using VT 7 [+0.02s] DEBUG: Activating VT 7 [+0.03s] DEBUG: Logging to /var/log/lightdm/x-0.log [+0.04s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0 [+0.04s] DEBUG: Launching X Server [+0.05s] DEBUG: Launching process 2078: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch [+0.05s] DEBUG: Waiting for ready signal from X server :0 [+0.05s] DEBUG: Acquired bus name org.freedesktop.DisplayManager [+0.05s] DEBUG: Registering seat with bus path /org/freedesktop/DisplayManager/Seat0 [+0.28s] DEBUG: Got signal 10 from process 2078 [+0.28s] DEBUG: Got signal from X server :0 [+0.28s] DEBUG: Connecting to XServer :0 [+0.29s] DEBUG: Starting greeter [+0.29s] DEBUG: Started session 2082 with service 'lightdm', username 'lightdm' [+0.36s] DEBUG: Session 2082 authentication complete with return value 0: Success [+0.36s] DEBUG: Greeter authorized [+0.36s] DEBUG: Logging to /var/log/lightdm/x-0-greeter.log [+0.36s] DEBUG: Session 2082 running command /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/lightdm-gtk-greeter [+0.58s] DEBUG: Greeter connected version=1.2.1 [+0.58s] DEBUG: Greeter connected, display is ready [+0.58s] DEBUG: New display ready, switching to it [+0.58s] DEBUG: Activating VT 7 [+1.04s] DEBUG: Greeter start authentication for andrew [+1.04s] DEBUG: Started session 2137 with service 'lightdm', username 'andrew' [+1.09s] DEBUG: Session 2137 got 1 message(s) from PAM [+1.09s] DEBUG: Prompt greeter with 1 message(s) [+17.24s] DEBUG: Cancel authentication [+17.24s] DEBUG: Session 2137: Sending SIGTERM

    Read the article

  • Should I install software on a "SAN"

    - by am2605
    Hi, I need to set up ColdFusion 9 on an Ubuntu server that has a SAN disk mounted. Is it appropriate to install the CF server software on this disk? I don't really understand the ins and outs of what a SAN is, so I am not sure if the intention is for me to solely install web content on it or whether the server software itself should go here too. Any advice would be extremely welcome. Many thanks, Andrew.

    Read the article

  • 403 Forbidden Error when trying to view localhost on Apache

    - by misbehavens
    I think my Apache must be all screwed up. I don't know if it ever worked. I just upgraded to Snow Leopard, and the first step on this tutorial is to start apache and check that it's working by opening http://localhost. It starts fine but when I go to localhost I get a 403 forbidden error. I don't know where to start figuring out how to fix it, so I wonder if a fresh install of Apache would do the trick. What do you think? Update: I found some error logs in /private/var/log/apache2/. Found this in one of the logs. Not sure what it means: [Tue Nov 10 17:53:08 2009] [notice] caught SIGTERM, shutting down [Tue Nov 10 21:49:17 2009] [warn] Init: Session Cache is not configured [hint: SSLSessionCache] Warning: DocumentRoot [/usr/docs/dummy-host.example.com] does not exist Warning: DocumentRoot [/usr/docs/dummy-host2.example.com] does not exist httpd: Could not reliably determine the server's fully qualified domain name, using Andrews-Mac-Pro.local for ServerName mod_bonjour: Skipping user 'andrew' - cannot read index file '/Users/andrew/Sites/index.html'. [Tue Nov 10 21:49:19 2009] [notice] Digest: generating secret for digest authentication ... [Tue Nov 10 21:49:19 2009] [notice] Digest: done [Tue Nov 10 21:49:19 2009] [notice] Apache/2.2.11 (Unix) mod_ssl/2.2.11 OpenSSL/0.9.8k DAV/2 PHP/5.3.0 configured -- resuming normal operations Update: I also found something in the dummy-host.example.com-error_log file. I didn't set these dummy-host things by the way. Is this the default configuration? [Tue Nov 10 21:59:57 2009] [error] [client ::1] client denied by server configuration: /usr/docs Update: Woohoo! I found the file that had the virtual host definitions. It was in /etc/apache2/extra/httpd-vhosts.conf. It had those two dummy virtual host settings in there. I added a localhost virtual host. Not sure if this is necessary, but since it wasn't working before, decided to do it anyway. After removing the old virtual hosts, adding my new localhost virtual host, and restarting apache, it seems to work. So I guess whenever I want to add a virtual host, I only need to add it to this file? Or is there a hosts file somewhere, like there is on Linux? Update: Yes, there is an /etc/hosts file that needs to be changed too. Add the virtual host name to that file.
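
    For reference, a minimal sketch of the kind of localhost entry that goes into /etc/apache2/extra/httpd-vhosts.conf on that setup (Apache 2.2 syntax, matching the version in the log above; the DocumentRoot path is illustrative and must point at a directory that actually exists and is readable, otherwise the 403 comes back):

        # NameVirtualHost *:80 is already declared in the stock httpd-vhosts.conf
        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot "/Users/andrew/Sites"
            <Directory "/Users/andrew/Sites">
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Run sudo apachectl restart after editing so the new virtual host is picked up.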

    Read the article

  • Where does $PATH get set in OS X 10.6 Snow Leopard?

    - by misbehavens
    I type echo $PATH on the command line and get /opt/local/bin:/opt/local/sbin:/Users/andrew/bin:/usr/local/bin:/usr/local/mysql/bin:/usr/local/pear/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/opt/local/bin:/usr/local/git/bin I'm wondering where this is getting set since my .bash_login file is empty. I'm particularly concerned that, after installing MacPorts, it installed a bunch of junk in /opt. I don't think that directory even exists in a normal Mac OS X install. Update: Thanks to jtimberman for correcting my echo $PATH statement
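
    For anyone tracing the same thing, a short checklist (as shell commands) of the places a login shell on 10.6 typically assembles that PATH from; this is a sketch, and the MacPorts detail in particular is an assumption about its usual installer behaviour:

        cat /etc/paths                  # system defaults, read by path_helper (invoked from /etc/profile)
        ls /etc/paths.d/                # extra entries dropped in by installers
        ls -la ~/.bash_profile ~/.bash_login ~/.profile    # bash reads only the first of these that exists
        grep -n "opt/local" ~/.bash_profile ~/.profile /etc/profile 2>/dev/null   # where MacPorts usually appends itself

    Note that bash reads only the first of ~/.bash_profile, ~/.bash_login and ~/.profile that exists, so an empty ~/.bash_login proves little if one of the others is also present.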

    Read the article

  • run growlnotify over ssh

    - by CCG121
    I am trying to run growlnotify over ssh. When I run ssh [email protected] "growlnotify -m test" I get bash: growlnotify command not found. However, running it straight from the Mac works fine. Am I missing something simple, or is there a really complex reason this won't work? PS: SSH keys are enabled in both directions. Edit: I logged into the Mac from a remote machine over ssh and tried to run it and it ran fine, so it seems to only affect the one-line login-and-run approach. I also tried DrC's suggestion of cat-ing the .profile into .bashrc.
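
    The likely culprit (an assumption, but it matches the symptoms above): a one-shot ssh host command runs a non-interactive shell that never sources the login files where growlnotify's directory gets added to PATH, so the command is only found in a full interactive login. Two hedged workarounds, using a placeholder host name and the typical install location (confirm the real path with which growlnotify in an interactive session on the Mac):

        ssh user@mac.local "/usr/local/bin/growlnotify -m test"        # call it by its full path
        ssh user@mac.local 'source ~/.profile && growlnotify -m test'  # or load the profile first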

    Read the article

  • Silverlight Cream for December 11, 2010 -- #1007

    - by Dave Campbell
    In this Issue: Mike Wolf, Colin Eberhardt, Mike Snow(-2-, -3-), David Kelley(-2-, -3-), Jesse Liberty(-2-), Erik Mork, Jeff Blankenburg, Laurent Duveau, and Jeremy Likness(-2-). Above the Fold: Silverlight: "The definitive guide to Notification Window in Silverlight 4" Laurent Duveau WP7: "Making the MS Adcontrol REALLY work on phone 7" David Kelley Silverlight 5: "Silverlight 5: In the Trenches" Mike Wolf From SilverlightCream.com: Silverlight 5: In the Trenches How many people can discuss Silverlight 5 'In the Trenches' ... apparently Mike Wolf can, and that's just what he's done in the post to whet your whistle (do people say that any more?) for when we can all get our hands on the bits. Visiblox, Visifire, DynamicDataDisplay – Charting Performance Comparison Colin Eberhardt responds to reader requests, and revisits his Charting Performance after also some discussion with David Anson about the Silverlight Toolkit. This time including Dynamic Data Display which is quite impressive in the ratings... check out the post and the code. Win7 Mobile Back Arrow Key Interception The simple fact is heavy bloggers rise, like Cream, to the top of my list, and I've been missing some goodness from Mike Snow... he's blogging WP7 stuff now... first up of the 'missed' ones is this one on intercepting the Back Arrow Key. Animating the Color of an Object Switching back to Silverlight in general, Mike Snow's next post is on Animating color of an object, such as text foreground. Tombstoning on the Win7 Mobile Platform And now back to WP7, Mike Snow is discussing Tombstoning... discussing the various aspects of it, and some code to use, if you haven't gotten your head around this one yet. What I tell Designers to give me... Integrating and Digital Zen David Kelley has a post up describing what he needs from designers to get his job done... I heard him discussing this at the Firestarter, and didn't realize he had written it up... these 8 items are things learned by doing, and should be discussed with your designers. Making the MS Adcontrol REALLY work on phone 7 David Kelley also has a post up discussing how to really get the Ad control working on WP7 apps... since I've seen lots of posts about this, having a definitive explanation from someone that's doing it is a good thing. Performance Optimization on Phone 7 In a break from his norm of discussing UX, David Kelley is talking about performance on WP7 devices in this post. Windows Phone From Scratch #10 – Visual State Part 2 When I saw Jesse Liberty's latest post, I realized I had missed his Part 2 of VSM for WP7 ... don't you miss it... this completes the good stuff from number 9 :) Windows Phone From Scratch #11 – Behaviors Jesse Liberty's latest Windows Phone from Scratch is up... and he's talking about Behaviors this time out... more of an overview or introduction to behaviors, but all good Show 112: Scott Guthrie on Silverlight 5 Erik Mork's latest Sparkling Client podcast is up and he was able to get some time with Scott Guthrie at the Firestarter. What I Learned in WP7 – Issue #1 Jeff Blankenburg decided to do another series, only this one isn't promised as every day... it's "What I Learned in WP7" ... and the first is up... good interesting bits found surrounding the WP7 device. The definitive guide to Notification Window in Silverlight 4 Laurent Duveau has a great post up that will have you doing Silverlight 'toast' notifications in no time... good descriptions and source. 
    Lessons Learned in Personal Web Page Part 1: Dynamic XAML - Jeremy Likness has rebuilt his personal website in Silverlight and is sharing some of that experience on his blog. This first post discusses the dynamic content. He used Jounce, of course, and included the Silverlight Navigation Framework, and... you can download all the source.
    Lessons Learned in Personal Web Page Part 2: Enter the Matrix - Jeremy Likness's second post about building his website is all about the 'Matrix' page... pretty cool stuff... check it out... I think it looks great.
    Stay in the 'Light!

    Read the article

  • Creating and using VM Groups in VirtualBox

    - by Fat Bloke
    With VirtualBox 4.2 we introduced the Groups feature, which allows you to organize and manage your guest virtual machines collectively, rather than individually. Groups are quite a powerful concept and there are a few nice features you may not have discovered yet, so here's a bit more information about groups and how they can be used.
    Creating a group: Groups are just ad hoc collections of virtual machines, and there are several ways of creating a group. In the VirtualBox Manager GUI: drag one VM onto another to create a group of those 2 VMs (you can then drag and drop more VMs into that group); or select multiple VMs (using Ctrl or Shift and click) and then select the menu Machine...Group, press Cmd+U (Mac) or Ctrl+U (Windows), or right-click the multiple selection and choose Group. From the command line: group membership is an attribute of the VM, so you can modify the VM to belong in a group. For example, to put the VM "Ubuntu" into the group "TestGroup" run this command: VBoxManage modifyvm "Ubuntu" --groups "/TestGroup"
    Deleting a Group: Groups can be deleted by removing the group attribute from all the VMs that constitute that group. To do this via the command line the syntax is: VBoxManage modifyvm "Ubuntu" --groups "" In the VirtualBox Manager, this is more easily done by right-clicking on a group header and selecting "Ungroup".
    Multiple Groups: Now that we understand that Groups are just attributes of VMs, it can be seen that VMs can exist in multiple groups, for example: VBoxManage modifyvm "Ubuntu" --groups "/TestGroup","/ProjectX","/ProjectY" Or, via the VirtualBox Manager, you can drag VMs while pressing the Alt key (Mac) or Ctrl (other platforms).
    Nested Groups: Just like you can drag VMs around in the VirtualBox Manager, you can also drag whole groups around, and dropping a group within a group creates a nested group. Via the command line, nested groups are specified using a path-like syntax: VBoxManage modifyvm "Ubuntu" --groups "/TestGroup/Linux" ...which creates a sub-group and puts the VM in it.
    Navigating Groups: In the VirtualBox Manager, Groups can be collapsed and expanded by clicking on the caret at the left of the Group Header. You can also Enter and Leave groups, either by using the right-arrow/left-arrow keys or by clicking on the caret on the right-hand side of the Group Header, leading to a view of just the Group contents. You can Leave or return to the parent in the same way. Don't worry if you are imprecise with your clicking: you can double-click on the entire right half of the Group Header to Enter a group, and on the left half to Leave a group. Double-clicking on the left half when you're at the top will roll up or collapse the group.
    Group Operations: The real power of Groups is not simply in arranging them prettily in the Manager. Rather it is about performing collective operations on them, once you have grouped them appropriately. For example, let's say that you are working on a project (Project X) where you have a solution stack of a Database VM, a Middleware/App VM, and a couple of client VMs which you use to test your app. With VM Groups you can start the whole stack with one operation.
    Select the Group Header, and choose Start. The full list of operations that may be performed on Groups is:
    Start (from any state, boot or resume; hold Shift while starting to run the VMs headless)
    Pause
    Reset
    Close (Save state, Send shutdown signal, Poweroff)
    Discard saved state
    Show in filesystem
    Sort
    Conclusion: Hopefully we've shown that the introduction of VM Groups not only makes Oracle VM VirtualBox pretty, but pretty powerful too. - FB
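    As an aside (not part of the original post), the same group attribute can drive scripted, collective operations outside the GUI. Below is a minimal shell sketch that starts every VM in a given group in headless mode. It assumes the machine-readable showvminfo output exposes group membership in a groups= field, which is worth verifying against your VirtualBox release; the group name "/ProjectX" is just an illustration.

    #!/bin/sh
    # Start every VM belonging to the group below, in headless mode.
    # Assumptions: VBoxManage is on the PATH, and `showvminfo --machinereadable`
    # reports group membership in a groups="..." field (check your release).
    GROUP="/ProjectX"

    VBoxManage list vms | cut -d'"' -f2 | while read -r VM; do
        # Plain substring match: "/ProjectX" would also match "/ProjectX2";
        # tighten the pattern if your group names overlap.
        if VBoxManage showvminfo "$VM" --machinereadable | grep '^groups=' | grep -q "$GROUP"; then
            echo "Starting $VM ..."
            VBoxManage startvm "$VM" --type headless
        fi
    done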

    Read the article

  • glutPostRedisplay() does not update display

    - by A D
    I am currently drawing a rectangle to the screen and would like to move it by using the arrow keys. However, when I press an arrow key the vertex data changes but the display does not refresh to reflect these changes, even though I am calling glutPostRedisplay(). Is there something else that I must do? My code:

    #include <GL/glew.h>
    #include <GL/freeglut.h>
    #include <GL/freeglut_ext.h>
    #include <iostream>
    #include "Shaders.h"

    using namespace std;

    const int NUM_VERTICES = 6;
    const GLfloat POS_Y = -0.1;
    const GLfloat NEG_Y = -0.01;

    struct Vertex {
        GLfloat x;
        GLfloat y;
        Vertex() : x(0), y(0) {}
        Vertex(GLfloat givenX, GLfloat givenY) : x(givenX), y(givenY) {}
    };

    Vertex left_paddle[NUM_VERTICES];

    void init() {
        glClearColor(1.0f, 1.0f, 1.0f, 0.0f);
        left_paddle[0] = Vertex(-0.95f, 0.95f);
        left_paddle[1] = Vertex(-0.95f, 0.0f);
        left_paddle[2] = Vertex(-0.85f, 0.95f);
        left_paddle[3] = Vertex(-0.85f, 0.95f);
        left_paddle[4] = Vertex(-0.95f, 0.0f);
        left_paddle[5] = Vertex(-0.85f, 0.0f);

        GLuint vao;
        glGenVertexArrays( 1, &vao );
        glBindVertexArray( vao );

        GLuint buffer;
        glGenBuffers(1, &buffer);
        glBindBuffer(GL_ARRAY_BUFFER, buffer);
        glBufferData(GL_ARRAY_BUFFER, sizeof(left_paddle), NULL, GL_STATIC_DRAW);

        GLuint program = init_shaders( "vshader.glsl", "fshader.glsl" );
        glUseProgram( program );

        GLuint loc = glGetAttribLocation( program, "vPosition" );
        glEnableVertexAttribArray( loc );
        glVertexAttribPointer( loc, 2, GL_FLOAT, GL_FALSE, 0, 0);
        glBindVertexArray(vao);
    }

    void movePaddle(Vertex* array, GLfloat change) {
        for(int i = 0; i < NUM_VERTICES; i++) {
            array[i].y = array[i].y + change;
        }
        glutPostRedisplay();
    }

    void special( int key, int x, int y ) {
        switch ( key ) {
            case GLUT_KEY_DOWN:
                movePaddle(left_paddle, NEG_Y);
                break;
        }
    }

    void display() {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glDrawArrays(GL_TRIANGLES, 0, 6);
        glutSwapBuffers();
    }

    int main(int argc, char **argv) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutInitWindowSize(500,500);
        glutCreateWindow("Rectangle");
        glewInit();
        init();
        glutDisplayFunc(display);
        glutSpecialFunc(special);
        glutMainLoop();
        return 0;
    }
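    One likely explanation, offered here as a hedged sketch rather than a confirmed answer: glutPostRedisplay() only marks the window as needing a redraw; it does not copy the modified left_paddle array to the GPU. In the code as posted, glBufferData() is also called with NULL, so the buffer object never receives the vertex data at all, and movePaddle() only changes the CPU-side copy. A minimal fix, assuming the shaders and the rest of the setup are correct, is to upload the array after filling or changing it, for example:

    // Upload the CPU-side vertex array into the buffer currently bound to
    // GL_ARRAY_BUFFER (init() never unbinds it, so it is still bound here).
    void uploadPaddle() {
        glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(left_paddle), left_paddle);
    }

    // In init(), call uploadPaddle() once after glBufferData(...) so the
    // initial rectangle actually has data behind it, then:

    void movePaddle(Vertex* array, GLfloat change) {
        for (int i = 0; i < NUM_VERTICES; i++) {
            array[i].y = array[i].y + change;
        }
        uploadPaddle();        // refresh the GPU copy of the vertices
        glutPostRedisplay();   // then ask GLUT to schedule a redraw
    }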

    Read the article

  • Oracle for PCI-DSS Security Webcast

    - by Alex Blyth
    Thanks to everyone who attended the Oracle for PCI-DSS security webcast today. It was good to see how the products we talked about last week can be used to address the PCI standard requirements. A big thanks to Chris Pickett for presenting a great session and running us through a very cool demo showing how the data is protected throughout its life. The replay of the session can be downloaded here. Slides can be downloaded here. Oracle for PCI-DSS Security Compliance - view more presentations from Oracle Australia. Next week we resume our regular schedule with Andrew Clarke taking us through Oracle Application Express (APEX) - one of the best kept secrets in the Oracle Database. Enroll for this session here (and now :) ) Till next week Cheers Alex

    Read the article

  • Oracle: "The Cloud takes the best of the Mainframes" and fixes their flaws, provided it relies on open standards

    Oracle: "The Cloud takes the best of the Mainframes" and fixes their flaws, provided it relies on open standards. About 7 years ago, Oracle began a strategic shift. Its goal was to simplify IT deployments and architectures. Today, the vendor of many hats (BI, BPM, hardware, databases, Java, etc.) is making a second one: the Cloud. And again under the banner of simplification. "The best Cloud will be completely transparent to users," predicts Andrew Sutherland, the cordial (and Scottish) Senior Vice-President Fusion Middleware Europe, who was passing through Paris this morning. The implication being, t...

    Read the article

  • Learn how Oracle storage efficiencies can help your budget

    - by jenny.gelhausen
    Mark Your Calendar! Live Webcast: Next Generation Storage Management Solutions. Wednesday, March 24th, 2010 at 9:00am PT or your local time. Please plan to join us for this webcast where Forrester senior analyst Andrew Reichman will discuss the pillars of storage efficiency, how to measure and improve it, and how this can help your business immediately alleviate budget pressures. Joining Mr. Reichman are Phil Stephenson, Senior Principal Product Manager at Oracle, and Matthew Baier, Oracle Product Director, who will explain to you the next generation storage capabilities available in Oracle Database 11g and Oracle Exadata. Register for this March 24th live webcast today!

    Read the article

  • Smarter, Faster, Cheaper: The Insurance Industry’s Dream

    - by Jenna Danko
    On June 3rd, I saw the Gaylord Resort Centre in Washington D.C. become the hub of C-level executives and managers of insurance carriers for the IASA 2013 Conference. Insurance Accounting/Regulation and Technology sessions took the focus, but there were plenty of tertiary sessions for career development, which complemented the overall strong networking side of the conference. As an exhibitor, Oracle, along with several hundred other product providers, welcomed the opportunity to display and demonstrate our solutions, and we were encouraged by the hustle and bustle of the exhibition floor. The IASA organizers had pre-arranged fast track tours whereby interested conference delegates could sign up for a series of like-themed presentations from vendors, giving them a level of 'Speed Dating' introductions to possible solutions and services. Oracle participated in a number of these, which were very well subscribed. Clearly, the conference had a strong business focus; however, attendees saw technology as a key enabler to get their processes done smarter, faster and cheaper. As we navigated through the exhibition, it became clear from the inquiries that came to us that insurance carriers are gravitating to a number of focus areas:
    Navigating the maze of upcoming regulatory reporting changes. For US carriers with European holdings, Solvency II carries a myriad of rules and reporting requirements. Alignment across the globe of the Own Risk and Solvency Assessment (ORSA) processes brings to the fore the National Association of Insurance Commissioners' (NAIC) recent guidance manual publication.
    Doing more with less, and expecting more from technology for fewer dollars. The overall cost of IT, in particular hardware, has dropped in real terms (though the appetite for more has risen: more CPU, more RAM, more storage), but software has seen less change. Clearly, customers expect either to pay less or get a lot more from their software solutions for the same buck.
    Doing things smarter. A recognition that, with the advance of technology, standing still effectively means going backwards. Technology, and in particular its interaction with human business processes, has undergone incredible change over the past 5 years. Consumer usage (iPhones, etc.) has been at the forefront, but now at the enterprise level ever more effective technology exploitation is beginning to take place. Data, and in particular the knowledge gleaned from it, is refining and improving business processes. Organizations are now consuming more data than ever before, and it is set to grow exponentially for some time to come. Amassing large volumes of data is one thing, but effectively analyzing that data is another. It is the results of such analysis that lead to improvements both in terms of insurance product offerings and the processes to support them.
    Regulatory compliance: damned if you do and damned if you don't! Clearly, around the globe a lot is changing from a regulatory perspective, and it is evident that, whilst there is greater convergence of regulatory requirements across jurisdictions bringing uniformity, there is also a lot of work to be done in the next 5 years. Just like big data, effective regulatory compliance often hides golden nuggets that can give competitive advantages.
    From Oracle's perspective, our Rating Engine, Billing, Document Management and Insurance Analytics solutions on display served to strike up good conversations and, as is always the case at conferences, it was a great opportunity to meet and speak with existing Oracle customers that we might not otherwise have caught up with for a while. Fortunately, I was able to catch up on a few sessions at the close of the exhibition. The speaker quality was high and the audience asked challenging, but pertinent, questions. During Dr. Jackie Freiberg's keynote "Bye Bye Business as Usual," the author discussed 8 strategies to help leaders create a culture where teams consistently deliver innovative ideas by disrupting the status quo. The very first strategy: get wired for innovation. Freiberg admitted that folks in the insurance and financial services industry understand and know innovation is important, but oftentimes they are slow adopters. Today, technology and innovation go hand in hand. In speaking to delegates during and after the conference, a high degree of satisfaction could be measured from their positive comments on the speaker sessions and the exhibitors. I suspect many will be back in 2014, with Indianapolis as the conference location. Did you attend the IASA Conference in Washington D.C.? If so, I would love to hear your comments. Andrew Collins is the Director, Solvency II, at Oracle Financial Services. He can be reached at andrew.collins AT oracle.com.

    Read the article

  • It's Official, I'm a Geek

    - by andyleonard
    I'm honored to join Glen Gordon (Blog - @glengordon) and G. Andrew Duthie (Blog - @devhammer) today at 3:00 PM EDT for an MSDN Webcast entitled GeekSpeak: Inside SQL Server Integration Services (SSIS). This is a LiveMeeting and you can join in the fun as an attendee here. It's a live show, so bring your questions! :{> Andy

    Read the article

  • Speaking at SQLRelay. Will you be there?

    - by jamiet
    SQL Relay (#sqlrelay) is fast approaching and I wanted to take this opportunity to tell you a little about it. SQL Relay is a 5-day tour around the UK taking in five SQL Server user groups, each one comprising a full day of SQL Server related learnings. The dates and venues are:
    21st May, Edinburgh
    22nd May, Manchester
    23rd May, Birmingham
    24th May, Bristol
    30th May, London
    Click on the appropriate link to see the full agenda and to book your spot. SQL Relay features some of this country's most prominent SQL Server speakers including Chris Webb, Tony Rogerson, Andrew Fryer, Martin Bell, Allan Mitchell, Steve Shaw, Gordon Meyer, Satya Jayanty, Chris Testa O'Neill, Duncan Sutcliffe, Rob Carrol, me and SQL Server UK Product Manager Morris Novello, so I really encourage you to go - you have my word it'll be an informative and, more importantly, enjoyable day out from your regular 9-to-5. I am presenting my session "A Lap Around the SSIS Catalog" at Edinburgh and Manchester, so if you're going, I hope to see you there. @Jamiet

    Read the article

  • NY Coherence SIG, June 3

    - by ruma.sanyal
    The New York Coherence SIG is hosting its eighth meeting. Since its inception in August 2008, over 85 different companies have attended NYCSIG meetings, with over 375 individual members. Whether you're an experienced Coherence user or new to Data Grid technology, the NYCSIG is the community for realizing Coherence-related projects and best practices.
    Date: Thursday, June 3, 2010
    Time: 5:30pm - 8:00pm ET
    Where: Oracle Office, Room 30076, 520 Madison Avenue, 30th Floor, NY
    The new book by Aleksander Seovic "Oracle Coherence 3.5" will be raffled!
    Presentations:
    "Performance Management of Coherence Applications" - Randy Stafford, Consulting Solutions Architect (Oracle)
    "Best practices for monitoring your Coherence application during the SDLC" - Ivan Ho, Co-founder and EVP of Development (Evident Software)
    "Coherence Cluster-side Programming" - Andrew Wilson, Coherence Architect (at a couple of Tier-1 Banks in London)
    Please Register! Registration is required for building security.

    Read the article

< Previous Page | 69 70 71 72 73 74 75 76 77 78 79 80  | Next Page >