Search Results

Search found 1485 results on 60 pages for 'formats'.


  • Windows 7 and Ubuntu Boot/Corruption Problems

    - by Kiraisuki
    I searched around, but I couldn't find the answer to why Windows 7 Ultimate x64 and Ubuntu 12.04 LTS x64 couldn't live together happily on my Asus G1s-X1 laptop. I had Windows 7 Ultimate x64 installed on the laptop when I bought it (I bought it used; it shipped with Vista when new) and I wanted to try out Ubuntu and see what all the hype about the free OS was about. I installed Ubuntu on an external 80GB Iomega HDD, with Windows 7 on my main drive. They both worked fine for about 2-3 weeks, until Ubuntu suddenly became unable to boot. A few days after Ubuntu failed, Windows became badly corrupted (winload.exe, ntkrnlpa.exe, and various other files corrupted at random) and the Windows Recovery Environment was completely useless. Booting to an Ubuntu live USB and trying to reinstall it failed, and trying to wipe the main drive and install it there failed as well (something about my graphics card). I managed to get Windows 7 Ultimate x64 back up and running (after many disk formats), but now I am left with a broken (and invisible) Ubuntu installation on the external drive. Is there any way to get the broken, non-bootable Ubuntu installation off the HDD without damaging or erasing the many files and programs installed and stored on the 80GB drive?

    Read the article

  • How to use mount points in MilkShape models?

    - by vividos
    I have bought the Warriors & Commoners model pack from Frogames. The pack contains (among other formats) two animated models and several non-animated objects (axe, shield, pilosities, etc.) in MilkShape3D format. I looked at the official "MilkShape 3D Viewer v2.0" (msViewer2.zip at http://www.chumba.ch/chumbalum-soft/ms3d/download.html) source code and implemented loading the model and calculating the joint matrices, and everything looks fine. In the model there are several joints that are designated as the "mount points" for static objects like the axe and shield. I now want to "put" the axe into the hand of the animated model, and I can't quite figure out how. I put the animated vertices in a VBO that gets updated every frame (I know I should do this with a shader, but I haven't had time for that yet). I put the static vertices in another VBO that I want to keep static and not update every frame. I then tried to render the animated vertices first, then use the joint matrix of the "mount joint" to calculate the location of the static object. I tried many things, and what seems to be about right is to transpose the joint matrix and then use glMultMatrixf() to transform the modelview matrix. For some objects like the axe this works, but not for others, e.g. the pilosities. Now my question: how is this generally implemented when using bone/joint models, and especially with MilkShape3D models? Am I on the right track?
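
    For illustration, a minimal sketch of one common approach. It assumes the joint's absolute (model-space) matrix for the current frame has already been computed, as in the MilkShape viewer sample, and that it is stored row-major; drawStaticObject() is a hypothetical stand-in for whatever issues the draw call for the static VBO.

      // Attach a static mesh (e.g. the axe) to a joint of an animated MS3D model
      // using the fixed-function OpenGL pipeline.
      #include <GL/gl.h>

      void drawStaticObject()
      {
          // assumed: bind the static VBO and issue glDrawArrays/glDrawElements here
      }

      void drawMountedObject(const float jointAbsolute[16])   // row-major 4x4
      {
          // OpenGL expects column-major matrices, so transpose while copying.
          float m[16];
          for (int r = 0; r < 4; ++r)
              for (int c = 0; c < 4; ++c)
                  m[c * 4 + r] = jointAbsolute[r * 4 + c];

          glMatrixMode(GL_MODELVIEW);
          glPushMatrix();          // keep the model's own modelview intact
          glMultMatrixf(m);        // modelview = modelview * jointAbsolute
          drawStaticObject();      // render the axe/shield VBO unchanged
          glPopMatrix();
      }

    One possible explanation for the mixed results: if an attachment was modeled in the model's bind pose rather than relative to its mount joint, multiplying by the joint's absolute matrix alone will misplace it; in that case the usual fix is to apply jointAbsolute * inverse(bindPoseAbsolute) instead, which may be why some objects (axe) line up while others (pilosities) do not.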

    Read the article

  • How to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24bpp/32bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX/OpenGL rendering, so I don't need to use graphics-card-compatible bitmap formats. My questions:
    1. What is the "usual" or "normal" way to store bitmaps in memory (in C++ engines/projects)?
    2. How to store bitmaps for high-performance algorithms, such that read/write times are the fastest? (fixed array? with/without padding? 24-bpp or 32-bpp?)
    3. How to store bitmaps for applications handling a lot of bitmap data, to minimize memory usage? (JPEG? or a faster [de]compression algorithm?)
    Some possible methods:
    1. Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access; all pixels are allocated in one contiguous memory chunk (could be 1-10 MB).
    2. Use a form of "sparse" data storage so each line of the bitmap is allocated separately, using more memory overall but requiring smaller contiguous memory segments.
    3. Store bitmaps in their compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 secs.
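
    For illustration, a minimal sketch of the first method above: one contiguous, packed 32-bpp buffer with direct index/pointer access. The names are illustrative, not from any particular engine.

      #include <cstddef>
      #include <cstdint>
      #include <vector>

      struct Bitmap32
      {
          int width  = 0;
          int height = 0;
          std::vector<uint32_t> pixels;   // one 0xAARRGGBB value per pixel, row-major

          Bitmap32(int w, int h) : width(w), height(h), pixels(size_t(w) * h, 0) {}

          // Rows are tightly packed; with full 32-bit pixels no per-row padding is needed.
          uint32_t& at(int x, int y)       { return pixels[size_t(y) * width + x]; }
          uint32_t  at(int x, int y) const { return pixels[size_t(y) * width + x]; }

          // Raw pointer for tight inner loops (memcpy-style blits, SIMD, etc.).
          uint32_t*       data()       { return pixels.data(); }
          const uint32_t* data() const { return pixels.data(); }
      };

      // Example inner loop: fill the whole surface with one opaque color.
      inline void clear(Bitmap32& bmp, uint32_t argb)
      {
          uint32_t* p = bmp.data();
          for (size_t i = 0, n = bmp.pixels.size(); i < n; ++i)
              p[i] = argb;
      }

    32-bpp keeps every pixel word-aligned, which usually reads and writes faster than packed 24-bpp even though it costs a third more memory; the compressed-on-disk/decode-on-demand idea from the third method can be layered on top of this for rarely used images.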

    Read the article

  • NINTENDO, EDCON and ALLEGIS GROUP @ Oracle Open World 2012 Conference Session (CON9418): The Business Case for Oracle Exalogic: A Customer Perspective

    - by Sanjeev Sharma
    Are you looking to deliver breakthrough performance for packaged and custom applications? For many front-office applications such as Oracle WebCenter Sites, Oracle Transportation Management, and Oracle's ATG and Siebel product families, improved performance leads directly to greater revenue or cost savings for the business - a compelling proposition. For back-office applications, improved performance has tangible benefits in terms of footprint reductions. For all applications, Oracle Exalogic and Oracle Exadata provide an engineered solution with shorter time to value and lower operational costs. Edcon is a leading clothing, footwear and textiles (CFT) retailing group in southern Africa trading through a range of retail formats. The company has grown from opening its first store in 1929 to ten retail brands trading in over 1000 stores in South Africa, Botswana, Namibia, Swaziland and Lesotho. Edcon's retail business has, through recent acquisitions, added top stationery and houseware brands as well as general merchandise to its CFT portfolio. Edcon was looking to consolidate its existing middleware components (WebLogic and Oracle SOA) and retail applications (Retek, Siebel and E-Business Suite) on a common platform and turned to Oracle Exalogic. With Oracle Exalogic, Edcon is able to derive significant HW CAPEX savings, improve response times of core business applications and mitigate operating risk. Hear senior business leaders from Nintendo, Edcon and Allegis Group discuss the business value of leveraging Oracle Exalogic at the following conference session at Oracle Open World 2012:
    Session: CON9418 - The Business Case for Oracle Exalogic: A Customer Perspective
    Date: Monday, 1 Oct, 2012
    Time: 1:45 pm - 2:45 pm (PST)
    Venue: Moscone South (306)

    Read the article

  • External display resolution issue

    - by Steven
    I'm new to Ubuntu. I've 'dabbled' in the past but Windows was annoying me too much so I finally made the change. One thing I use my laptop for is outputting the screen onto an external monitor. In Windows this usually works fine (but when there is an issue, it's a pain to resolve and has crashed my computer on many occasions). In Ubuntu it's a very simple case of "plug and play" because it displays the content immediately with no major problems. However, I can only alter the resolution of the external display to 4:3 formats, whereas the external monitor is 16:9, so the picture is stretched. In Windows there's loads of resolutions to choose from. If I disable the laptop's screen, it still doesn't allow me to display the correct resolution externally (on VGA). Further to that, if I close the laptop it keeps the external screen as it is, but when I reopen the laptop, the laptop screen doesn't turn back on until I restart. If it's a video I am watching, I can use VLC to alter the aspect ratio so the video looks fine, but it's quite an annoying work around. I used terminal to find my graphics controller, which is "Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller [8086:2a42] (rev 07)" (not great, I know... the laptop was a free gift with my mobile!) Any idea of how I can get the resolution on the external monitor to be correct?

    Read the article

  • Fusion Applications Announces the Online Feature Query Tool

    - by Richard Lefebvre
    Fusion Applications Development is pleased to announce the availability of a new, online tool for viewing Fusion Features. Oracle Product Features allows you to view new features in Fusion across multiple releases, families and products. You can view online or download the data in a variety of formats, including pdf and xls. This easy-to-use tool covers the same content and therefore replaces the pdf versions of the What's New documents. You can access Oracle Product Features from the Fusion Learning Center under Featured Assets > Product Features Query Tool. It can also be found under Release Readiness > Release Overview. Oracle Product Features will be available to customers and Partners from MyOracleSupport, oracle.com and the Partner Network Fusion Learning Center in the near future.  Oracle Product Features provides you with a high level of flexibility allowing you to only see the content you want, whether it is for a single Fusion product family, such as Human Capital Management (HCM), across several releases, or to view the entire listing of new features.  Content currently incorporates all new features introduced in Releases 3, 4 and 5. Release 6 content will be added in the near future.

    Read the article

  • Using mod_speling with multi-level htaccess and rewriterules

    - by michaelcgorman
    We recently switched formats for managing our 301s. For the most part, everything went well, but it seems to have stopped mod_speling from working properly. Here's what we changed.
    Old /var/www/html/.htaccess:
      RewriteEngine on
      RewriteBase /
      # Change SHTML to HTML
      RewriteRule ^(.*)\.shtml$ $1.html [R=permanent,L]
      # Change PCF to HTML ('cause, you know, we probably have CMS users like that...)
      RewriteRule ^(.*)\.pcf$ $1.html [R=permanent,L]
      # Force WWW subdomain for all requests
      RewriteCond %{HTTP_HOST} !^www.example.edu$ [NC]
      RewriteRule ^(.*)$ http://www.example.edu/$1 [R,L]
      # User accounts are on sun.example.edu
      RedirectMatch ^/~(.*)$ http://sun.example.edu/~$1
      # Remove index.html at the end of URLs
      RewriteCond %{REQUEST_URI} ^(.*/)index\.html$ [NC]
      RewriteRule . %1 [R=301,NE,L]
      Redirect 301 /academics/calendar2012-13.html http://www.example.edu/academics/calendar.html
      Redirect 301 /academics/departments/ http://www.example.edu/majors/
      Redirect 301 /academics/Pre-Medical.pdf http://www.example.edu/academics/Pre-Medicine.pdf
      Redirect 301 ...
    New /var/www/html/.htaccess:
      RewriteEngine on
      RewriteBase /
      # Change SHTML to HTML
      RewriteRule ^(.*)\.shtml$ $1.html [R=permanent,L]
      # Change PCF to HTML ('cause, you know, we probably have CMS users like that...)
      RewriteRule ^(.*)\.pcf$ $1.html [R=permanent,L]
      # Force WWW subdomain for all requests
      RewriteCond %{HTTP_HOST} !^www.example.edu$ [NC]
      RewriteRule ^(.*)$ http://www.example.edu/$1 [R,L]
      # User accounts are on sun.example.edu
      RedirectMatch ^/~(.*)$ http://sun.example.edu/~$1
      # Remove index.html at the end of URLs
      RewriteCond %{REQUEST_URI} ^(.*/)index\.html$ [NC]
      RewriteRule . %1 [R=301,NE,L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*) 404/$1
    And then we added a new file at /var/www/html/404/.htaccess:
      RewriteEngine on
      RewriteBase /404
      RewriteRule ^academics/calendar2012-13.html$ /academics/calendar.html [R=302,L]
      RewriteRule ^academics/departments/$ /majors/ [R=301,L]
      RewriteRule ^academics/Pre-Medical.pdf$ /academics/Pre-Medicine.pdf [R=301,L]
      RewriteRule ...
    I do have (Webmin-based) access to httpd.conf (though we don't want to store all our 301s there, if possible). We're running Apache 2.2.15 on RHEL 6 on a server in our own data center. Like I said, the only problem we're seeing is that mod_speling isn't doing its magic anymore. The new format has so many advantages over the old that we really don't want to go back, but mod_speling is so nice to have that we'd also really like it to work if possible. Any ideas for how we might be able to fix mod_speling?

    Read the article

  • Vector-based fonts vs. bitmap fonts in (2d) games?

    - by jmp97
    I know that many games use bitmap fonts. What are the advantages of vector-based font rendering/manipulation compared to bitmap fonts, and in which scenarios do they matter most? Please focus on 2D games when answering. If relevant, please include examples of games using either approach. Some factors you might consider:
    - amount of text used in the game
    - scaling of text
    - overlaying glyphs and anti-aliasing
    - general rendering quality
    - font colors and styling
    - user interface requirements
    - localisation / unicode
    - text wrapping and formatting
    - cross-platform deployment
    - 2d vs 3d
    Background: I am developing a simple falling-blocks game in 2D, targeted for PC. I would like to add text labels for the level, score, and menu buttons. I am using SFML, which uses FreeType internally, so vector-based features are easily available for my project. In my view, font sizes in simple games often don't vary, and bitmap fonts should be easier for cross-platform concerns (font formats and font rendering quality). But I am unsure whether I am missing some important points here, especially since I want to polish the looks of the final game.

    Read the article

  • How can I read a portion of one Minecraft world file and write it into another?

    - by RapierMother
    I'm looking to read block data from one Minecraft world and write the data into certain places in another. I have a Minecraft world, let's say "TemplateWorld", and a 2D list of Point objects. I'm developing an application that should use the x and y values of these Points as x and z reference coordinates from which to read constant-sized areas of blocks from the TemplateWorld. It should then write these blocks into another Minecraft world at constant y coordinates, with x & z coordinates determined based on each Point's index in the 2D list. The issue is that, while I've found a decent amount of information online regarding Minecraft world formats, I haven't found what I really need: more of a breakdown by hex address of where/what everything is. For example, I could have the TemplateWorld actually be a .schematic file rather than a world; I just need to be able to read the bytes of the file, know that the actual block data starts always at a certain address (or after a certain instance of FF, etc.), and how it's stored. Once I know that, it's easy as pie to just read the bytes and store them.
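
    For illustration, a minimal sketch of the copy step once the block data has been decoded. This assumes the .schematic container has already been gunzipped and its NBT parsed elsewhere (the block data is not at a fixed hex offset in the file; it lives in a compressed NBT payload), and that the "Blocks" byte array uses the conventional YZX layout: index = (y * length + z) * width + x.

      #include <cstddef>
      #include <cstdint>
      #include <vector>

      struct Schematic
      {
          int width  = 0;   // X extent
          int height = 0;   // Y extent
          int length = 0;   // Z extent
          std::vector<uint8_t> blocks;   // decoded "Blocks" byte array, YZX order

          uint8_t blockAt(int x, int y, int z) const
          {
              return blocks[(size_t(y) * length + z) * width + x];
          }
      };

      // Copy a boxW x boxH x boxL region whose minimum corner is (x0, y0, z0).
      // The result is returned in the same YZX order for easy re-insertion.
      std::vector<uint8_t> copyRegion(const Schematic& src,
                                      int x0, int y0, int z0,
                                      int boxW, int boxH, int boxL)
      {
          std::vector<uint8_t> out(size_t(boxW) * boxH * boxL);
          for (int y = 0; y < boxH; ++y)
              for (int z = 0; z < boxL; ++z)
                  for (int x = 0; x < boxW; ++x)
                      out[(size_t(y) * boxL + z) * boxW + x] =
                          src.blockAt(x0 + x, y0 + y, z0 + z);
          return out;
      }

    Writing the copied blocks into a full world is more involved, because Anvil region files split the world into individually compressed chunks and sections rather than one flat array, so an NBT library is usually the practical route there.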

    Read the article

  • AdSense (reports) and custom channels

    - by RobbertT
    Please help me to further understand custom channels. Google describes them as a way to map your ads, but I still have a few questions: Is it correct that a single custom channel per ad is not very useful, since you can already specify ad blocks in the AdSense reports? I have multiple ads in multiple custom channels. After this I created one custom channel and added all the ads to it. I made this channel targetable, so people can target all the ads at once through this channel. Is this a good way to do it? In other words, is it possible to have ads in multiple custom channels (without targeting, just for analyzing) and then create one custom channel with targeting that embraces all the (desired) ads? Why is it not possible for me to analyze custom channels (or ad blocks & formats) per site in the AdSense reports? Or am I doing something wrong? If not, do I have to create different custom channels per site to see how certain ads are doing at a site level?

    Read the article

  • How far should one take e-mail address validation?

    - by Mike Tomasello
    I'm wondering how far people should take the validation of e-mail addresses. My field is primarily web development, but this applies anywhere. I've seen a few approaches:
    - simply checking whether an "@" is present, which is dead simple but of course not that reliable
    - a more complex regex test for standard e-mail formats
    - a full regex against RFC 2822 - the problem with this is that an e-mail address might be valid but probably not what the user meant
    - DNS validation
    - SMTP validation
    As many people might know (but many don't), e-mail addresses can have a lot of strange variation that most people don't usually consider (see RFC 2822 3.4.1), but you have to think about the goals of your validation: are you simply trying to ensure that mail can be sent to an address, or that it is what the user probably meant to put in (which is unlikely in a lot of the more obscure cases of otherwise 'valid' addresses)? An option I've considered is simply giving a warning for a more esoteric address but still allowing the request to go through, but this adds more complexity to a form and most users are likely to be confused. While DNS validation / SMTP validation seem like no-brainers, I foresee problems where the DNS server/SMTP server is temporarily down and a user is unable to register somewhere, or the user's SMTP server doesn't support the required features. How might some experienced developers out there handle this? Are there any other approaches than the ones I've listed? Edit: I completely forgot the most obvious of all, sending a confirmation e-mail! Thanks to the answerers for pointing that one out. Yes, this one is pretty foolproof, but it does require extra hassle on the part of everyone involved. The user has to fetch some e-mail, and the developer needs to remember user data before it is even confirmed as valid.
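
    For illustration, a minimal sketch of the pragmatic middle ground: a cheap syntactic check, with the real guarantee left to a confirmation e-mail. The pattern deliberately does not attempt the full RFC 2822 grammar, and whether to even require a dot in the domain is itself a judgment call.

      #include <iostream>
      #include <regex>
      #include <string>

      // "something@something.tld": one '@', no whitespace, a dot in the domain part.
      bool looksLikeEmail(const std::string& address)
      {
          static const std::regex pattern(R"(^[^@\s]+@[^@\s]+\.[^@\s]+$)");
          return std::regex_match(address, pattern);
      }

      int main()
      {
          for (const std::string s : {"user@example.com", "no-at-sign", "user@localhost"})
              std::cout << s << " -> " << (looksLikeEmail(s) ? "accept" : "reject") << '\n';
      }

    Note that "user@localhost" is rejected by this sketch even though it can be a deliverable address on an intranet, which is exactly the kind of trade-off the question is about.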

    Read the article

  • More NASM with GVim

    - by MarkPearl
    Today I am bashing around with nasm again… some useful things I found…
    Set the current working directory of gvim to the current file path
    I have found that setting the current working directory of gvim to the file location is very useful, especially if you want to use commands in gvim to run your compiled code. It can be done by typing the following in command mode in gvim: cd %:p:h
    Once you have set it, you can use ! to run commands you would normally run in the DOS shell, e.g. !dir
    Compiling code to make an executable
    There are three things you need to specify to compile a basic file in nasm: the output file format, the output file name, and the source file name. An example would be the following (where you have a file called temp.asm, which is the source file): nasm -f bin temp.asm -o temp.com
    Output file format
    The -f switch specifies the output file format (in this case a binary file). To get a list of the available output file formats you can type nasm -hf (for my installation bin is the default, in which case I can omit it).
    Output file name
    This is just the name you want the compiled file to be called. For Windows machines I specify .com as my default format.

    Read the article

  • How to reset the language of the package descriptions

    - by xubuntix
    I had German as my main language until about a year ago. Later I changed it to English. Most parts of the system accepted the change. The notable exceptions are the package descriptions, which remain in German for some packages. You can see in the image (apt-cache and software-center) that while some descriptions are in English, some have remained in German. So the question is: how do I reset this? I guess there is a description cache somewhere that needs to be told to update all descriptions?
    EDIT: As asked, the output of some language-related commands:
    $ cat /etc/default/locale
    LANG="en_US.UTF-8"
    $ apt-config dump | grep Lang
    Acquire::Languages "";
    Acquire::Languages:: "de_DE";
    Acquire::Languages:: "de";
    Acquire::Languages:: "en";
    Acquire::Languages:: "none";
    $ locale
    LANG=de_DE.UTF-8
    LANGUAGE=en
    LC_CTYPE="de_DE.UTF-8"
    LC_NUMERIC="de_DE.UTF-8"
    LC_TIME="de_DE.UTF-8"
    LC_COLLATE="de_DE.UTF-8"
    LC_MONETARY="de_DE.UTF-8"
    LC_MESSAGES="de_DE.UTF-8"
    LC_PAPER="de_DE.UTF-8"
    LC_NAME="de_DE.UTF-8"
    LC_ADDRESS="de_DE.UTF-8"
    LC_TELEPHONE="de_DE.UTF-8"
    LC_MEASUREMENT="de_DE.UTF-8"
    LC_IDENTIFICATION="de_DE.UTF-8"
    LC_ALL=
    As a note: I'm not sure what each entry means, but some of the de_DE.UTF-8 entries are probably OK, since I do want paper sizes, monetary values, time, etc. in standard German formats.

    Read the article

  • Where did I write that code?

    - by Tarun Arora
    Ever been in that situation when you desperately need to find that code you checked into TFS a few days back, but just can't remember what team project, what branch, what solution or what file you checked it into? Well, you are not alone… if only there were a way to efficiently search for files and text within TFS. It is possible… you need to get your hands on Agent Ransack. This is a stand-alone tool that does not integrate with TFS, but it gives you the capability to search through text files effortlessly. Agent Ransack searches through files, text or otherwise, fast and efficiently. When searching the contents of files for code, or other text, Agent Ransack displays the text found so you can quickly browse the results without having to open each file separately! Agent Ransack is free for both personal and commercial use and can be downloaded from here. Set the Look In directory of the Ransack search tool to your TFS workspace and type the text you would like to scan for; you can limit the search by narrowing down the filter path or the name of the file. Found text is shown with highlighted keywords so you don't need to waste time opening each file looking for the right information. The regular expression wizard helps you build regular expressions for complex pattern-matching searches. You even have the option of searching by modified, created or last accessed date. Export your results to a file for importing into other apps or for sharing with others. Agent Ransack also provides search support for popular Office formats, including Office 2007 and OpenOffice. Next time you are looking for that elusive line of code, whether it is a method declaration, function call, or algorithm that you checked into TFS, use Agent Ransack for a quick search.

    Read the article

  • HDA NVidia (GT520) - Sound Issue

    - by Oliver Lucas
    I have a GT520 graphics card and I am trying to get the sound working with my XBMC setup, but I'm having trouble. Things I have completed:
    aplay -l
    List of PLAYBACK Hardware Devices
    card 0: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
    Subdevices: 1/1
    Subdevice #0: subdevice #0
    then lspci:
    01:00.1 Audio device: nVidia Corporation HDMI Audio stub (rev a1)
    and alsamixer, which is set to unmuted. Everything looks fine, so I ran:
    aplay -D hw:0,3 /home/ollie/Music/alex.mp3
    Playing raw data '/home/ollie/Music/alex.mp3' : Unsigned 8 bit, Rate 8000 Hz, Mono
    aplay: set_params:1059: Sample format non available
    Available formats: - S16_LE - S32_LE
    with no luck.. then speaker-test:
    Playback device is default
    Stream parameters are 48000Hz, S16_LE, 1 channels
    Using 16 octaves of pink noise
    Playback open error: -2,No such file or directory
    I also tried working through ftp://download.nvidia.com/XFree86/gpu-hdmi-audio-document/gpu-hdmi-audio.html#upgrading_alsa_driver and http://wiki.xbmc.org/index.php?title=HOW-TO:Setup_audio_over_HDMI_on_nVidia_GeForce/nForce_controller plus 20 other websites with selective "fixes" etc., but no luck _< I am a complete beginner with Ubuntu, so this is a really steep learning curve for me; not sure I'm learning much though, as it's all just headaches atm! Thanks for any help. Ollie

    Read the article

  • OpenXML error “file is corrupt and cannot be opened.”

    - by nmgomes
    From time to time I hear people saying their new web application supports data export to Excel format. So far so good… but they don't tell the whole story… in fact, almost all the time what is happening is that they are exporting data to a comma-separated file or simply exporting GridView-rendered HTML to an xls file. OK… it works, but it's not something I would be proud of. So… yesterday I decided to take a look at the Office Open XML File Formats Specification (the Microsoft Office 2007+ format), which is based on well-known technologies: ZIP and XML. I started by installing the Open XML SDK 2.0 for Microsoft Office and playing with some samples. Then I decided to try it on a more complex web application, and the "file is corrupt and cannot be opened." message started happening. Google shows that many people suffer from the same problem, and it seems there are many reasons that can trigger this message. Some are related to the process itself, others to encodings or even styling. Well, none solved my problem and I had to dig… well, not that much: I simply changed the output file extension to zip and extracted the zip content. Then I did the same to the output file from my first sample, compared both zip contents with SourceGear DiffMerge and found that my problem was culture related. Yes, my complex application sets Thread.CurrentThread.CurrentCulture to a non-English culture. For sample purposes I was simply using the ToString method to convert numbers and dates to a string representation, but I forgot that XML is culture invariant, and thus using a decimal separator other than "." will result in a deserialization problem. I solved the "file is corrupt and cannot be opened." error by using the Convert.ToString(object, CultureInfo.InvariantCulture) method instead of the ToString method. Hope this can help someone.
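
    The fix above is C#-specific. Purely as an illustration of the same pitfall, here is a small C++ sketch showing how a locale-dependent decimal separator changes serialized output, and how pinning the classic "C" locale (analogous to CultureInfo.InvariantCulture) avoids it; the "de_DE.UTF-8" locale name is an assumption and is only exercised if it is installed on the machine.

      #include <iostream>
      #include <locale>
      #include <sstream>
      #include <stdexcept>
      #include <string>

      // Format a number the way a serializer would if it honored the given locale.
      std::string format(double value, const std::locale& loc)
      {
          std::ostringstream ss;
          ss.imbue(loc);
          ss << value;
          return ss.str();
      }

      int main()
      {
          const double amount = 1234.56;

          // Invariant output: always uses '.' as the decimal separator.
          std::cout << "invariant: " << format(amount, std::locale::classic()) << '\n';

          // Locale-dependent output: may use ',' and digit grouping, which breaks
          // consumers that expect machine-readable numbers (as OpenXML does).
          try {
              std::cout << "de_DE    : " << format(amount, std::locale("de_DE.UTF-8")) << '\n';
          } catch (const std::runtime_error&) {
              std::cout << "de_DE locale not installed on this system\n";
          }
      }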

    Read the article

  • Is there a formula for this?

    - by Gortron
    TL;DR: Is there any way to work out whether known numbers between a known starting and ending figure should be positive or negative? I am developing an application in PHP which can import and read PDFs. The PDFs are financial ones, such as bank statements, with records of transactions in and out of a bank account. I only have PDFs to work with, no other formats such as CSV, unfortunately. I convert the PDF to HTML using pdftohtml and start parsing the data; the intended end result is an array of transactions. So far I have it working smoothly, collecting dates, descriptions and balances. Converting to XML instead doesn't help. There are other pieces of transactional data, such as debit or credit amounts. In the PDF, the credit amount is in one column and the debit amount is in another column, so it is quite clear there. However, when converted to HTML, the formatting is lost and therefore I don't know whether an amount was a credit or a debit. So, my question is: given a starting balance, an ending balance and several known figures in between, is it possible for a program to work out whether those known figures are credit or debit amounts? I imagine there could potentially be several combinations of those known values that reach the ending balance, so I'd like to apply a formula that returns the credit/debit sequence only if it is the only possible solution. If there are several ways of adding/subtracting the known values to reach the end balance, I can ask the user to look at it manually, but I'd like to keep this to a minimum if possible. Possible to do, do you think? Thank you in advance for any help.
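
    For illustration, a minimal brute-force sketch of that check (written here in C++; the same logic ports directly to PHP). Given the opening balance, the closing balance and the unsigned amounts in statement order, it tries every credit/debit combination and returns an assignment only if exactly one combination reproduces the closing balance. Amounts are kept in integer cents to avoid floating-point drift, and trying all 2^n combinations is fine for the handful of rows on a typical statement page.

      #include <cstddef>
      #include <cstdint>
      #include <optional>
      #include <vector>

      // signs[i] == +1 means credit, -1 means debit.
      // Assumes a modest number of amounts (well under 64) per page.
      std::optional<std::vector<int>> uniqueSigns(int64_t openingCents,
                                                  int64_t closingCents,
                                                  const std::vector<int64_t>& amounts)
      {
          const size_t n = amounts.size();
          std::optional<std::vector<int>> found;

          for (uint64_t mask = 0; mask < (uint64_t(1) << n); ++mask) {
              int64_t balance = openingCents;
              for (size_t i = 0; i < n; ++i)
                  balance += (mask & (uint64_t(1) << i)) ? amounts[i] : -amounts[i];

              if (balance == closingCents) {
                  if (found)
                      return std::nullopt;          // ambiguous: more than one way
                  std::vector<int> signs(n);
                  for (size_t i = 0; i < n; ++i)
                      signs[i] = (mask & (uint64_t(1) << i)) ? +1 : -1;
                  found = std::move(signs);
              }
          }
          return found;                             // empty if no combination fits
      }

    If the running balance is printed per row (as on many statements), the problem gets much easier: each transaction's sign follows directly from the difference between consecutive balances, with no search needed.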

    Read the article

  • Where can I find Ad Networks with single liner Ads?

    - by MaX
    I've developed a site that serves pure HTML weather widgets (and they are great looking too). After just two months I am generating 1.25K hits monthly (Google Analytics). Now I want to generate some money out of it. You can check my service out here. I am looking for an affiliate or ad service that I can hook up with, but there is a twist in the story. I want a single-line text ad in a particular location, otherwise the widgets will look rubbish; see this snapshot: Plus I have some unique places on my site to place some banner ads as well. Here is the existing set of services that I've already tried: AdSense doesn't allow or offer such ad formats. Peefly provides you with straight links and works best, but I recorded some clicks (through Google Events) and they didn't show me any; plus it introduces the overhead of manually going and choosing your links. BidVertise is totally rubbish: it opens popups and whatnot, and makes the site look like spam. I am new to this ad stuff, so I have limited knowledge. Suggestions please? I have one more place in the forecast area, but I want to start simple. P.S. I also have a MetroUI-like widget coming in the pipeline, but it's not ready yet.

    Read the article

  • 16-bit PNGs in Slick2D

    - by Neglected
    I'm working on a project and I'm using some 3rd-party sprites just to get it off the ground; recently I've run into a hitch. Slick2D doesn't seem to want to load my images. That is, it warns me that the images are the wrong bit depth. All the images are in 16-bit PNG form (PNG is required for transparency). Is there any way I can disable the warning (being the bad-guy programmer here: the console print for each individual load really slows down loading), or is there another solution? I was thinking about converting all images (using ImageMagick) to .gif (with an alpha channel). Would there be any loss in quality between formats? EDIT: I tried using ImageMagick, but some of the sprites use pure black, so I can't do that without wrecking the image. EDIT2: Using "identify" on any of the images shows them as being 8-bit, but Slick2D won't load them. What the hell? D: EDIT3: Issue solved (ish). If you are googling this, just disable the Java PNG loader from Slick by sticking this somewhere in your code (like the main method): System.setProperty("org.newdawn.slick.pngloader", "false");

    Read the article

  • How does HDR work?

    - by dotminic
    I'm trying to understand what HDR is and how it works. I understand the basic concepts and have a slight idea of how it is implemented with D3D/HLSL. However, it's still pretty foggy. Say I'm rendering a sphere with a texture of the earth and a small point list of vertices to act as stars: how would I render this in HDR? Here are a few things I'm confused about: I'm guessing I can't use just any basic image format for the texture, as the values would be limited to [0, 255] and clamped to [0, 1] in a shader. The same goes for the back buffer; I take it the format needs to be a floating-point format? What are the other steps involved? Surely there has to be more than just using floating-point formats to render to a render target and then applying some bloom as a post-process? (considering the output will be 8bpp anyway) Basically, what are the steps for HDR? How does it work? I can't seem to find any good papers / articles that describe the process, other than this one, but it seems to skim over the basics a little, so it's confusing.
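
    For illustration, a minimal CPU-side sketch of the step that usually causes the confusion: the scene is first rendered (or accumulated) into a floating-point buffer whose values may exceed 1.0, and a tone-mapping pass then compresses it to displayable 8-bit output. The Reinhard operator and fixed exposure below are just one simple choice; in a real D3D/HLSL renderer the same math runs as a full-screen pixel shader over a float16/float32 render target, with bloom and eye adaptation as additional passes.

      #include <algorithm>
      #include <cmath>
      #include <cstdint>
      #include <vector>

      struct LinearRGB { float r, g, b; };   // unbounded linear-light values

      std::vector<uint8_t> toneMap(const std::vector<LinearRGB>& hdr, float exposure = 1.0f)
      {
          std::vector<uint8_t> ldr(hdr.size() * 3);
          for (size_t i = 0; i < hdr.size(); ++i) {
              const float in[3] = { hdr[i].r, hdr[i].g, hdr[i].b };
              for (int c = 0; c < 3; ++c) {
                  float v = in[c] * exposure;
                  v = v / (1.0f + v);                // Reinhard: maps [0, inf) into [0, 1)
                  v = std::pow(v, 1.0f / 2.2f);      // rough gamma for display
                  ldr[i * 3 + c] = uint8_t(std::clamp(v, 0.0f, 1.0f) * 255.0f + 0.5f);
              }
          }
          return ldr;
      }

    This is also where the format questions resolve themselves: source textures and the final back buffer can stay 8-bit, but the intermediate render target needs a floating-point (or otherwise high-range) format precisely so that values above 1.0 survive until the tone-mapping pass.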

    Read the article

  • Reason why Windows can't be installed on Ubuntu, any help though?

    - by Terzuz
    Well, I got my Windows Vista .iso file, put it on my USB drive and extracted it. Then I restarted, went to "Boot Device Options" and clicked "Generic USB" (what I labeled it when I formatted). It sat on a black screen for 15 minutes, then just went to Ubuntu. Now I think I've found out exactly what happened. The reason Windows won't install from within Ubuntu is simple: not if you save it in NTFS format or MSDOS (both Windows formats), not if you use anything special; this even applies to the Ubuntu downloader. The specific reason I just noticed: since WINE (the Windows application loader) can't run straight off the USB, you can't open the Setup.exe file needed to install the operating system. That means you cannot install it at all if it is a .exe installer file. So what can we do? The only other way is to make a partition with GParted, then install WINE, then run your Windows Setup.exe file, and WINE will load it up! Why not open Setup.exe before you make a partition? Simple: it won't work. I have tried; it says you need __ amount of space and it can be put on any partition. Yeah, any partition NOT BEING USED. Here is where I need your help! I want to make a partition, but they all show as locked when I open GParted. So how am I supposed to open them up? Get a hammer and tell HP my laptop didn't want to open its partitions? Nope. I need some help on how to unlock the partitions, or what to do. Thank you in advance. PS: If I could, I would put a bounty on this, but I do not have enough reputation!

    Read the article

  • Using an alternate JSON Serializer in ASP.NET Web API

    - by Rick Strahl
    The new ASP.NET Web API that Microsoft released alongside MVC 4.0 Beta last week is a great framework for building REST and AJAX APIs. I've been working with it for quite a while now and I really like the way it works and the complete set of features it provides 'in the box'. It's about time that Microsoft gets a decent API for building generic HTTP endpoints into the framework. DataContractJsonSerializer sucks As nice as Web API's overall design is one thing still sucks: The built-in JSON Serialization uses the DataContractJsonSerializer which is just too limiting for many scenarios. The biggest issues I have with it are: No support for untyped values (object, dynamic, Anonymous Types) MS AJAX style Date Formatting Ugly serialization formats for types like Dictionaries To me the most serious issue is dealing with serialization of untyped objects. I have number of applications with AJAX front ends that dynamically reformat data from business objects to fit a specific message format that certain UI components require. The most common scenario I have there are IEnumerable query results from a database with fields from the result set rearranged to fit the sometimes unconventional formats required for the UI components (like jqGrid for example). Creating custom types to fit these messages seems like overkill and projections using Linq makes this much easier to code up. Alas DataContractJsonSerializer doesn't support it. Neither does DataContractSerializer for XML output for that matter. What this means is that you can't do stuff like this in Web API out of the box:public object GetAnonymousType() { return new { name = "Rick", company = "West Wind", entered= DateTime.Now }; } Basically anything that doesn't have an explicit type DataContractJsonSerializer will not let you return. FWIW, the same is true for XmlSerializer which also doesn't work with non-typed values for serialization. The example above is obviously contrived with a hardcoded object graph, but it's not uncommon to get dynamic values returned from queries that have anonymous types for their result projections. Apparently there's a good possibility that Microsoft will ship Json.NET as part of Web API RTM release.  Scott Hanselman confirmed this as a footnote in his JSON Dates post a few days ago. I've heard several other people from Microsoft confirm that Json.NET will be included and be the default JSON serializer, but no details yet in what capacity it will show up. Let's hope it ends up as the default in the box. Meanwhile this post will show you how you can use it today with the beta and get JSON that matches what you should see in the RTM version. What about JsonValue? To be fair Web API DOES include a new JsonValue/JsonObject/JsonArray type that allow you to address some of these scenarios. JsonValue is a new type in the System.Json assembly that can be used to build up an object graph based on a dictionary. It's actually a really cool implementation of a dynamic type that allows you to create an object graph and spit it out to JSON without having to create .NET type first. JsonValue can also receive a JSON string and parse it without having to actually load it into a .NET type (which is something that's been missing in the core framework). This is really useful if you get a JSON result from an arbitrary service and you don't want to explicitly create a mapping type for the data returned. 
For serialization you can create an object structure on the fly and pass it back as part of an Web API action method like this:public JsonValue GetJsonValue() { dynamic json = new JsonObject(); json.name = "Rick"; json.company = "West Wind"; json.entered = DateTime.Now; dynamic address = new JsonObject(); address.street = "32 Kaiea"; address.zip = "96779"; json.address = address; dynamic phones = new JsonArray(); json.phoneNumbers = phones; dynamic phone = new JsonObject(); phone.type = "Home"; phone.number = "808 123-1233"; phones.Add(phone); phone = new JsonObject(); phone.type = "Home"; phone.number = "808 123-1233"; phones.Add(phone); //var jsonString = json.ToString(); return json; } which produces the following output (formatted here for easier reading):{ name: "rick", company: "West Wind", entered: "2012-03-08T15:33:19.673-10:00", address: { street: "32 Kaiea", zip: "96779" }, phoneNumbers: [ { type: "Home", number: "808 123-1233" }, { type: "Mobile", number: "808 123-1234" }] } If you need to build a simple JSON type on the fly these types work great. But if you have an existing type - or worse a query result/list that's already formatted JsonValue et al. become a pain to work with. As far as I can see there's no way to just throw an object instance at JsonValue and have it convert into JsonValue dictionary. It's a manual process. Using alternate Serializers in Web API So, currently the default serializer in WebAPI is DataContractJsonSeriaizer and I don't like it. You may not either, but luckily you can swap the serializer fairly easily. If you'd rather use the JavaScriptSerializer built into System.Web.Extensions or Json.NET today, it's not too difficult to create a custom MediaTypeFormatter that uses these serializers and can replace or partially replace the native serializer. 
Here's a MediaTypeFormatter implementation using the ASP.NET JavaScriptSerializer:using System; using System.Net.Http.Formatting; using System.Threading.Tasks; using System.Web.Script.Serialization; using System.Json; using System.IO; namespace Westwind.Web.WebApi { public class JavaScriptSerializerFormatter : MediaTypeFormatter { public JavaScriptSerializerFormatter() { SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json")); } protected override bool CanWriteType(Type type) { // don't serialize JsonValue structure use default for that if (type == typeof(JsonValue) || type == typeof(JsonObject) || type== typeof(JsonArray) ) return false; return true; } protected override bool CanReadType(Type type) { if (type == typeof(IKeyValueModel)) return false; return true; } protected override System.Threading.Tasks.Taskobject OnReadFromStreamAsync(Type type, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext) { var task = Taskobject.Factory.StartNew(() = { var ser = new JavaScriptSerializer(); string json; using (var sr = new StreamReader(stream)) { json = sr.ReadToEnd(); sr.Close(); } object val = ser.Deserialize(json,type); return val; }); return task; } protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext) { var task = Task.Factory.StartNew( () = { var ser = new JavaScriptSerializer(); var json = ser.Serialize(value); byte[] buf = System.Text.Encoding.Default.GetBytes(json); stream.Write(buf,0,buf.Length); stream.Flush(); }); return task; } } } Formatter implementation is pretty simple: You override 4 methods to tell which types you can handle and then handle the input or output streams to create/parse the JSON data. Note that when creating output you want to take care to still allow JsonValue/JsonObject/JsonArray types to be handled by the default serializer so those objects serialize properly - if you let either JavaScriptSerializer or JSON.NET handle them they'd try to render the dictionaries which is very undesirable. 
If you'd rather use Json.NET here's the JSON.NET version of the formatter:// this code requires a reference to JSON.NET in your project #if true using System; using System.Net.Http.Formatting; using System.Threading.Tasks; using System.Web.Script.Serialization; using System.Json; using Newtonsoft.Json; using System.IO; using Newtonsoft.Json.Converters; namespace Westwind.Web.WebApi { public class JsonNetFormatter : MediaTypeFormatter { public JsonNetFormatter() { SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json")); } protected override bool CanWriteType(Type type) { // don't serialize JsonValue structure use default for that if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray)) return false; return true; } protected override bool CanReadType(Type type) { if (type == typeof(IKeyValueModel)) return false; return true; } protected override System.Threading.Tasks.Taskobject OnReadFromStreamAsync(Type type, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext) { var task = Taskobject.Factory.StartNew(() = { var settings = new JsonSerializerSettings() { NullValueHandling = NullValueHandling.Ignore, }; var sr = new StreamReader(stream); var jreader = new JsonTextReader(sr); var ser = new JsonSerializer(); ser.Converters.Add(new IsoDateTimeConverter()); object val = ser.Deserialize(jreader, type); return val; }); return task; } protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext) { var task = Task.Factory.StartNew( () = { var settings = new JsonSerializerSettings() { NullValueHandling = NullValueHandling.Ignore, }; string json = JsonConvert.SerializeObject(value, Formatting.Indented, new JsonConverter[1] { new IsoDateTimeConverter() } ); byte[] buf = System.Text.Encoding.Default.GetBytes(json); stream.Write(buf,0,buf.Length); stream.Flush(); }); return task; } } } #endif   One advantage of the Json.NET serializer is that you can specify a few options on how things are formatted and handled. You get null value handling and you can plug in the IsoDateTimeConverter which is nice to product proper ISO dates that I would expect any Json serializer to output these days. Hooking up the Formatters Once you've created the custom formatters you need to enable them for your Web API application. To do this use the GlobalConfiguration.Configuration object and add the formatter to the Formatters collection. Here's what this looks like hooked up from Application_Start in a Web project:protected void Application_Start(object sender, EventArgs e) { // Action based routing (used for RPC calls) RouteTable.Routes.MapHttpRoute( name: "StockApi", routeTemplate: "stocks/{action}/{symbol}", defaults: new { symbol = RouteParameter.Optional, controller = "StockApi" } ); // WebApi Configuration to hook up formatters and message handlers // optional RegisterApis(GlobalConfiguration.Configuration); } public static void RegisterApis(HttpConfiguration config) { // Add JavaScriptSerializer formatter instead - add at top to make default //config.Formatters.Insert(0, new JavaScriptSerializerFormatter()); // Add Json.net formatter - add at the top so it fires first! 
// This leaves the old one in place so JsonValue/JsonObject/JsonArray still are handled config.Formatters.Insert(0, new JsonNetFormatter()); } One thing to remember here is the GlobalConfiguration object which is Web API's static configuration instance. I think this thing is seriously misnamed given that GlobalConfiguration could stand for anything and so is hard to discover if you don't know what you're looking for. How about WebApiConfiguration or something more descriptive? Anyway, once you know what it is you can use the Formatters collection to insert your custom formatter. Note that I insert my formatter at the top of the list so it takes precedence over the default formatter. I also am not removing the old formatter because I still want JsonValue/JsonObject/JsonArray to be handled by the default serialization mechanism. Since they process in sequence and I exclude processing for these types JsonValue et al. still get properly serialized/deserialized. Summary Currently DataContractJsonSerializer in Web API is a pain, but at least we have the ability with relatively limited effort to replace the MediaTypeFormatter and plug in our own JSON serializer. This is useful for many scenarios - if you have existing client applications that used MVC JsonResult or ASP.NET AJAX results from ASMX AJAX services you can plug in the JavaScript serializer and get exactly the same serializer you used in the past so your results will be the same and don't potentially break clients. JSON serializers do vary a bit in how they serialize some of the more complex types (like Dictionaries and dates for example) and so if you're migrating it might be helpful to ensure your client code doesn't break when you switch to ASP.NET Web API. Going forward it looks like Microsoft is planning on plugging in Json.Net into Web API and make that the default. I think that's an awesome choice since Json.net has been around forever, is fast and easy to use and provides a ton of functionality as part of this great library. I just wish Microsoft would have figured this out sooner instead of now at the last minute integrating with it especially given that Json.Net has a similar set of lower level JSON objects JsonValue/JsonObject etc. which now will end up being duplicated by the native System.Json stuff. It's not like we don't already have enough confusion regarding which JSON serializer to use (JavaScriptSerializer, DataContractJsonSerializer, JsonValue/JsonObject/JsonArray and now Json.net). For years I've been using my own JSON serializer because the built in choices are both limited. However, with an official encorsement of Json.Net I'm happily moving on to use that in my applications. Let's see and hope Microsoft gets this right before ASP.NET Web API goes gold.© Rick Strahl, West Wind Technologies, 2005-2012Posted in Web Api  AJAX  ASP.NET   Tweet !function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs"); (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • Enterprise Process Maps: A Process Picture worth a Million Words

    - by raul.goycoolea
    p { margin-bottom: 0.08in; }h1 { margin-top: 0.33in; margin-bottom: 0in; color: rgb(54, 95, 145); page-break-inside: avoid; }h1.western { font-family: "Cambria",serif; font-size: 14pt; }h1.cjk { font-family: "DejaVu Sans"; font-size: 14pt; }h1.ctl { font-size: 14pt; } Getting Started with Business Transformations A well-known proverb states that "A picture is worth a thousand words." In relation to Business Process Management (BPM), a credible analyst might have a few questions. What if the picture was taken from some particular angle, like directly overhead? What if it was taken from only an inch away or a mile away? What if the photographer did not focus the camera correctly? Does the value of the picture depend on who is looking at it? Enterprise Process Maps are analogous in this sense of relative value. Every BPM project (holistic BPM kick-off, enterprise system implementation, Service-oriented Architecture, business process transformation, corporate performance management, etc.) should be begin with a clear understanding of the business environment, from the biggest picture representations down to the lowest level required or desired for the particular project type, scope and objectives. The Enterprise Process Map serves as an entry point for the process architecture and is defined: the single highest level of process mapping for an organization. It is constructed and evaluated during the Strategy Phase of the Business Process Management Lifecycle. (see Figure 1) Fig. 1: Business Process Management Lifecycle Many organizations view such maps as visual abstractions, constructed for the single purpose of process categorization. This, in turn, results in a lesser focus on the inherent intricacies of the Enterprise Process view, which are explored in the course of this paper. With the main focus of a large scale process documentation effort usually underlying an ERP or other system implementation, it is common for the work to be driven by the desire to "get to the details," and to the type of modeling that will derive near-term tangible results. For instance, a project in American Pharmaceutical Company X is driven by the Director of IT. With 120+ systems in place, and a lack of standardized processes across the United States, he and the VP of IT have decided to embark on a long-term ERP implementation. At the forethought of both are questions, such as: How does my application architecture map to the business? What are each application's functionalities, and where do the business processes utilize them? Where can we retire legacy systems? Well-developed BPM methodologies prescribe numerous model types to capture such information and allow for thorough analysis in these areas. Process to application maps, Event Driven Process Chains, etc. provide this level of detail and facilitate the completion of such project-specific questions. These models and such analysis are appropriately carried out at a relatively low level of process detail. (see figure 2) Fig. 2: The Level Concept, Generic Process HierarchySome of the questions remaining are ones of documentation longevity, the continuation of BPM practice in the organization, process governance and ownership, process transparency and clarity in business process objectives and strategy. The Level Concept in Brief Figure 2 shows a generic, four-level process hierarchy depicting the breakdown of a "Process Area" into progressively more detailed process classifications. 
The number of levels and the names of these levels are flexible, and can be fit to the standards of the organization's chosen terminology or any other chosen reference model that makes logical sense for both short and long term process description. It is at Level 1 (in this case the Process Area level), that the Enterprise Process Map is created. This map and its contained objects become the foundation for a top-down approach to subsequent mapping, object relationship development, and analysis of the organization's processes and its supporting infrastructure. Additionally, this picture serves as a communication device, at an executive level, describing the design of the business in its service to a customer. It seems, then, imperative that the process development effort, and this map, start off on the right foot. Figuring out just what that right foot is, however, is critical and trend-setting in an evolving organization. Key Considerations Enterprise Process Maps are usually not as living and breathing as other process maps. Just as it would be an extremely difficult task to change the foundation of the Sears Tower or a city plan for the entire city of Chicago, the Enterprise Process view of an organization usually remains unchanged once developed (unless, of course, an organization is at a stage where it is capable of true, high-level process innovation). Regardless, the Enterprise Process map is a key first step, and one that must be taken in a precise way. What makes this groundwork solid depends on not only the materials used to construct it (process areas), but also the layout plan and knowledge base of what will be built (the entire process architecture). It seems reasonable that care and consideration are required to create this critical high level map... but what are the important factors? Does the process modeler need to worry about how many process areas there are? About who is looking at it? Should he only use the color pink because it's his boss' favorite color? Interestingly, and perhaps surprisingly, these are all valid considerations that may just require a bit of structure. Below are Three Key Factors to consider when building an Enterprise Process Map: Company Strategic Focus Process Categorization: Customer is Core End-to-end versus Functional Processes Company Strategic Focus As mentioned above, the Enterprise Process Map is created during the Strategy Phase of the Business Process Management Lifecycle. From Oracle Business Process Management methodology for business transformation, it is apparent that business processes exist for the purpose of achieving the strategic objectives of an organization. In a prescribed, top-down approach to process development, it must be ensured that each process fulfills its objectives, and in an aggregated manner, drives fulfillment of the strategic objectives of the company, whether for particular business segments or in a broader sense. This is a crucial point, as the strategic messages of the company must therefore resound in its process maps, in particular one that spans the processes of the complete business: the Enterprise Process Map. One simple example from Company X is shown below (see figure 3). Fig. 3: Company X Enterprise Process Map In reviewing Company X's Enterprise Process Map, one can immediately begin to understand the general strategic mindset of the organization. It shows that Company X is focused on its customers, defining 10 of its process areas belonging to customer-focused categories. 
Additionally, the organization views these end-customer-oriented process areas as part of customer-fulfilling value chains, while support process areas do not provide as much contiguous value. However, by including both support and strategic process categorizations, it becomes apparent that all processes are considered vital to the success of the customer-oriented focus processes. Below is an example from Company Y (see figure 4). Fig. 4: Company Y Enterprise Process Map Company Y, although also a customer-oriented company, sends a differently focused message with its depiction of the Enterprise Process Map. Along the top of the map is the company's product tree, overarching the process areas, which when executed deliver the products themselves. This indicates one strategic objective of excellence in product quality. Additionally, the view represents a less linear value chain, with strong overlaps of the various process areas. Marketing and quality management are seen as a key support processes, as they span the process lifecycle. Often, companies may incorporate graphics, logos and symbols representing customers and suppliers, and other objects to truly send the strategic message to the business. Other times, Enterprise Process Maps may show high level of responsibility to organizational units, or the application types that support the process areas. It is possible that hundreds of formats and focuses can be applied to an Enterprise Process Map. What is of vital importance, however, is which formats and focuses are chosen to truly represent the direction of the company, and serve as a driver for focusing the business on the strategic objectives set forth in that right. Process Categorization: Customer is Core In the previous two examples, processes were grouped using differing categories and techniques. Company X showed one support and three customer process categorizations using encompassing chevron objects; Customer Y achieved a less distinct categorization using a gradual color scheme. Either way, and in general, modeling of the process areas becomes even more valuable and easily understood within the context of business categorization, be it strategic or otherwise. But how one categorizes their processes is typically more complex than simply choosing object shapes and colors. Previously, it was stated that the ideal is a prescribed top-down approach to developing processes, to make certain linkages all the way back up to corporate strategy. But what about external influences? What forces push and pull corporate strategy? Industry maturity, product lifecycle, market profitability, competition, etc. can all drive the critical success factors of a particular business segment, or the company as a whole, in addition to previous corporate strategy. This may seem to be turning into a discussion of theory, but that is far from the case. In fact, in years of recent study and evolution of the way businesses operate, cross-industry and across the globe, one invariable has surfaced with such strength to make it undeniable in the game plan of any strategy fit for survival. That constant is the customer. Many of a company's critical success factors, in any business segment, relate to the customer: customer retention, satisfaction, loyalty, etc. Businesses serve customers, and so do a business's processes, mapped or unmapped. The most effective way to categorize processes is in a manner that visualizes convergence to what is core for a company. 
It is the value chain, beginning with the customer in mind, and ending with the fulfillment of that customer, that becomes the core, or the centerpiece, of the Enterprise Process Map (see figure 5).

Fig. 5: Company Z Enterprise Process Map

Company Z has what may be viewed as several different perspectives or "cuts" baked into its Enterprise Process Map. It has divided its processes into three main categories (top, middle, and bottom): Management Processes, the Core Value Chain, and Supporting Processes. The Core category begins with Corporate Marketing (which contains the activities of beginning to engage customers) and ends with Customer Service Management. Within the value chain, the company has divided its processes into the focus areas of its two primary business lines, Foods and Beverages. Does this mean that areas such as Strategy, Information Management, or Project Management are not as important as those in the Core category? No! In some cases, though, depending on the organization's understanding of high-level BPM concepts, the use of category names such as "Core," "Management," or "Support" can be a touchy subject. What is important to understand is that no matter the nomenclature chosen, the Core processes are those that drive directly to customer value, Support processes are those which make the Core processes possible to execute, and Management processes are those which steer and influence the Core. Some common terms for these three basic categorizations are Core, Customer Fulfillment, Customer Relationship Management, Governing, Controlling, Enabling, Support, etc.

End-to-end versus Functional Processes

Every level of process, high or low (function, task, activity, process/work step, or whatever an organization calls it), should add value to the flow of business in an organization. Suppose that within the process "Deliver Package," there is a documented task titled "Stop for ice cream." It doesn't take a process expert to deduce the room for improvement. Though stopping for ice cream may create gain for the one person performing it, it likely benefits neither the organization nor, more importantly, the customer. In most cases, "Stop for ice cream" wouldn't make it past the first pass of To-Be process development. What would make the cut, however, would be a flow of tasks that, each having its own value add, build up to greater and greater levels of process objective. In this case, those tasks would combine to achieve a status of "package delivered." Figure 6 shows a simple example: just as the package cannot be delivered (the outcome of the process) without first being retrieved, loaded, and the travel destination reached (the outcomes of the process steps), some higher level of process, "Play Practical Joke" (e.g., a main process or process area), cannot be completed until a package is delivered. It seems that isolated or functionally separated processes, such as "Deliver Package" (shown in Figure 6), are necessary, but they are always part of a bigger value chain. Each of these individual processes must be analyzed within the context of that value chain in order to ensure successful end-to-end process performance. For example, this company's "Create Joke Package" process could be operating flawlessly and efficiently, but if a joke is never developed, the package cannot be created, and the end-to-end process breaks.

Fig. 6: End-to-End Process Construction

That being recognized, it is clear that processes must be viewed as end-to-end, customer-to-customer, and in the context of company strategy.
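To make the package example concrete, below is a minimal sketch in Python (not taken from the article; the process names, outcomes, and the simple prerequisite model are illustrative assumptions only). It treats each functionally separated process as a named step with the outcomes it requires, and shows how the end-to-end chain breaks when an upstream outcome such as "joke developed" is missing.

```python
# Requires Python 3.9+ for the builtin generic annotations.
from dataclasses import dataclass, field


@dataclass
class FunctionalProcess:
    """A functionally separated process: the outcome it produces and the outcomes it requires."""
    name: str
    outcome: str
    required_outcomes: list[str] = field(default_factory=list)


def run_end_to_end(chain: list[FunctionalProcess]) -> list[str]:
    """Walk the chain in order and return the outcomes achieved.

    Raises an error at the first process whose prerequisites are missing,
    which is exactly where the end-to-end process "breaks".
    """
    achieved: list[str] = []
    for process in chain:
        missing = [o for o in process.required_outcomes if o not in achieved]
        if missing:
            raise RuntimeError(
                f"End-to-end chain breaks at '{process.name}': missing {missing}"
            )
        achieved.append(process.outcome)
    return achieved


if __name__ == "__main__":
    # Hypothetical "Play Practical Joke" value chain built from functional processes.
    play_practical_joke = [
        FunctionalProcess("Develop Joke", "joke developed"),
        FunctionalProcess("Create Joke Package", "package created", ["joke developed"]),
        FunctionalProcess("Deliver Package", "package delivered", ["package created"]),
    ]

    print(run_end_to_end(play_practical_joke))  # full chain completes

    try:
        # Skip "Develop Joke": "Create Joke Package" may run flawlessly in isolation,
        # but the end-to-end process cannot complete.
        run_end_to_end(play_practical_joke[1:])
    except RuntimeError as err:
        print(err)
```

Modeling each functional building block together with the outcomes it depends on is one simple way to keep both views visible: the isolated process and its place in the larger, customer-to-customer chain.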
But as can also be seen from the previous example, these vital end-to-end processes cannot be built without the functionally oriented building blocks. Without one, the other cannot be had, or at least not in a complete and organized fashion. As it turns out (though not discussed in depth here), the process modeling effort, BPM organizational development, and comprehensive coverage cannot be fully realized without a semi-functional, process-oriented approach. An Enterprise Process Map, then, should be concerned with both views: the building blocks, and the access points to the business-critical end-to-end processes they construct. Without the functional building blocks, all streams of work needed for any business transformation would be lost in a mess of process disorganization. End-to-end views are essential for optimization in context, understanding customer impacts, baselining all project phases, and aligning objectives. Including both views on an Enterprise Process Map allows management to understand the functional orientation of the company's processes, while still providing access to the end-to-end processes that are most valuable to them. Figures 7 and 8 show two unique ways to achieve a successful Enterprise Process Map.

Fig. 7: Simplified Enterprise Process Map with end-to-end Access Point

The first example (figure 7) is a simple map that shows a high-level set of process areas and a separate section with the end-to-end processes of concern for the organization. This particular map is filtered to show just one vital end-to-end process for a project-specific focus.

Fig. 8: Detailed Enterprise Process Map showing connected Functional Processes

The second example (figure 8) shows a more complex arrangement and categorization of functional processes (the names of the process areas have been removed). The end-to-end perspective is achieved at this level through the connections (interfaces at lower levels) between these functional process areas. An important point to note is that the organization of these two views of the Enterprise Process Map depends, in large part, on the orientation of its audience and the complexity of the landscape at the highest level. If both views are not apparent, the Enterprise Process Map is missing an opportunity to serve as a holistic, high-level view.

Conclusion

In the world of BPM, and specifically regarding Enterprise Process Maps, a picture can be worth as many words as the thought and effort that is put into it. Enterprise Process Maps alone cannot change an organization, but they serve more purposes than initially meet the eye, and therefore must be designed in a way that enables a BPM mindset, business process understanding, and business transformation efforts. Every Enterprise Process Map will and should be different across organizations. Its design will be driven by company strategy, the level of customer focus, and functional versus end-to-end orientations. This high-level description of the considerations behind Enterprise Process Maps is not a prescriptive "how to" guide. A company attempting to create one, however, may not have the practical BPM experience to truly explore its options or their impacts on the coming work of business process transformation. The biggest takeaway is that process modeling, at all levels, is a science and an art, and art is open to interpretation. It is critical that the modeler at the highest level of process mapping be cognizant of the message he is delivering and the factors at hand.
Without sufficient focus on the design of the Enterprise Process Map, an entire BPM effort may suffer. For additional information, please see Oracle Business Process Management.

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
(This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.)

Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.)

Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.)

This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and it grows in complexity in proportion to the need to simplify BI for users.

It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even an acknowledgment that users would get the data they want even if it meant they would have to manually enter it into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations.

Collaborative BI

I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer, and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management system from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system, but didn't know that's what I was doing at the time.
Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step, and was at the BI Conference last week, where I got to reminisce with him for a bit), and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools would catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective on the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person because, as Donald put it, "People don't trust data, they trust people."

The Microsoft BI Stack in General

A question I hear all the time from students when I'm teaching is how to know which tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with RDL in Reporting Services, which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years.

Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?"

Expo Hall

I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught my eye, however, that I'll share here.
Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions.

Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught my eye, and I think it describes a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive them in any (or all) of the following formats, where possible: PDF, epub, .mobi for Kindle, and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I find myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind!

Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or because multiple sessions I wanted to see were running concurrently. Fortunately, many of the sessions are available for viewing online at http://www.msteched.com/2010/NorthAmerica, along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation, as shown below.

    Read the article
