Search Results

Search found 941 results on 38 pages for 'tyler treat'.

Page 18/38 | < Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >

  • Where is the SQL Azure Development Environment

    - by BuckWoody
    Recently I posted an entry explaining that you can develop for Windows Azure without having to connect to the main service on the Internet, using the Software Development Kit (SDK), which installs two emulators - one for compute and the other for storage. That brought up the question of whether the same kind of thing exists for SQL Azure. The short answer is that there isn't one. While we'll keep making the development experience for all versions of SQL Server, including SQL Azure, easier to write against, for now you can simply treat it as another edition of SQL Server.

    For instance, many of us use the SQL Server Developer Edition - which in versions up to 2008 is actually the Enterprise Edition - to develop our code. We might write that code against all kinds of environments, from SQL Express through Enterprise Edition. We know which features work on a certain edition, what T-SQL it supports and so on, and develop accordingly. We then test on the actual platform to ensure the code runs as expected. You can simply fold SQL Azure into that same development process.

    When you're ready to deploy, if you're using SQL Server Management Studio 2008 R2 or higher, you can script out the database when you're done as a SQL Azure script (with change notifications where needed) by selecting the right "Engine Type" on the scripting panel. (Thanks to David Robinson for pointing this out and my co-worker Rick Shahid for the screen-shot - saved me firing up a VM this morning!)

    Will all this change? Will SSMS, "Data Dude" and other tools change to include SQL Azure? Well, I don't have a specific roadmap for those tools, but we're making big investments in Windows Azure and SQL Azure, so I can say that as time goes on, it will get easier. For now, make sure you know what features are and are not included in SQL Azure, and what T-SQL is supported. Here are a couple of references to help:

        General Guidelines and Limitations: http://msdn.microsoft.com/en-us/library/ee336245.aspx
        Transact-SQL Supported by SQL Azure: http://msdn.microsoft.com/en-us/library/ee336250.aspx
        SQL Azure Learning Plan: http://blogs.msdn.com/b/buckwoody/archive/2010/12/13/windows-azure-learning-plan-sql-azure.aspx

    Read the article

  • .NET CoffeeScript Handler

    - by Liam McLennan
    After more time than I care to admit, I have finally released a rudimentary Http Handler for serving compiled CoffeeScript from Asp.Net applications. It was a long and painful road, but I am glad to finally have a usable strategy for client-side scripting in CoffeeScript.

    Why CoffeeScript? As Douglas Crockford discussed in detail, Javascript is a mixture of good and bad features. The genius of CoffeeScript is to treat javascript in the browser as a virtual machine. By compiling to javascript, CoffeeScript gets a clean slate to re-implement syntax, taking the best of javascript and ruby and combining them into a beautiful scripting language. The only limitation is that CoffeeScript cannot do anything that javascript cannot do. Here is an example from the CoffeeScript website. First, the CoffeeScript syntax:

        reverse: (string) ->
          string.split('').reverse().join ''
        alert reverse '.eeffoC yrT'

    and the javascript that it compiles to:

        var reverse;
        reverse = function(string) {
          return string.split('').reverse().join('');
        };
        alert(reverse('.eeffoC yrT'));

    Areas For Improvement ;) The current implementation is deeply flawed; however, at this point I'm just glad it works. When the server receives a request for a coffeescript file, the following things happen:

        1. The CoffeeScriptHandler is invoked.
        2. If the script has previously been compiled, the compiled version is returned.
        3. Else it writes a script file containing the CoffeeScript compiler and the requested coffee script.
        4. The process shells out to CScript.exe to execute the script.
        5. The resulting javascript is sent back to the browser.

    This outlandish process is necessary because I could not find a way to directly execute the coffeescript compiler from .NET. If anyone can help out with that I would appreciate it.

    Read the article

  • In Google webmaster tools, can a "soft 404" be triggered by the text on the page?

    - by Stephen Ostermiller
    I just ran across an error in Google Webmaster Tools that I have never seen before. I manage the website for my local community band (I play trombone). One of the pages on the site is a list of our upcoming performances. It is powered by a WordPress events plugin that uses a database of upcoming events entered through the administration interface. We just finished up our summer and fall concerts, and our next performance will be our Christmas concert. I hadn't gotten around to adding that to the website yet, so there are no upcoming events shown on the page. In fact the text on the page says:

        No upcoming events listed under Performance. Check out past events for this category or view the full calendar.

    Then in Google Webmaster Tools, this page is showing up as a "soft 404": the page is returning a 200 status and Google is indicating that the 404 is "soft". I wouldn't have expected Googlebot to be sophisticated enough to parse that particular sentence. Is Googlebot able to detect that the text on the page indicates that there is currently no content, and then treat it as a 404 page because of that? If Google is treating this page as a soft 404 because of the text on the page, does that mean that, like regular 404 pages, the page won't show up in search results?

    Read the article

  • Performance-Driven Development

    - by BuckWoody
    I was reading a blog yesterday about the evils of SELECT *. The author pointed out that it's almost always a bad idea to use SELECT * for a query, but in the case of SQL Azure (or any cloud database, for that matter) it's especially bad, since you're paying for each transmission that comes down the line. A very good point indeed.

    This got me to thinking - shouldn't we treat ALL programming that way? In other words, wouldn't it make sense to pretend that we are paying for every chunk of data - a little less for a bit, a lot more for a BLOB or VARCHAR(MAX), that sort of thing? In effect, we really are paying for that.

    Which led me to the thought of Performance-Driven Development, or the act of programming with the goal of having the fastest code from the very outset. This isn't an original title, since a quick Bing search shows me a couple of offerings from Forrester and a professional in Israel who have already used it, but the general idea I'm thinking of is assigning a "cost" to each code round-trip - be it network, storage, trip time or other variables - and then rewarding the developers that come up with the fastest code. I wonder what kind of throughput and round-trip times you could get if your developers were paid on a scale of how fast the application performed...
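
    As a rough illustration of the idea (a minimal sketch only; the per-type byte sizes, the price per KB, and the column lists below are made-up numbers, not any real tariff):

        # Toy "data cost" model: charge each query by the bytes it is expected to move.
        # Sizes and the per-KB price are illustrative guesses, not real pricing.
        TYPE_COST = {"int": 4, "bit": 1, "datetime": 8, "varchar(50)": 50, "varchar(max)": 2000}

        def query_cost(columns, rows, round_trips=1, per_kb=0.0001):
            # Estimate a cost for a result set: bytes moved times a price per kilobyte.
            row_bytes = sum(TYPE_COST[c] for c in columns)
            total_kb = row_bytes * rows * round_trips / 1024
            return total_kb * per_kb

        # SELECT * pulls every column; an explicit column list pulls only what the page needs.
        print(query_cost(["int", "varchar(max)", "datetime", "bit"], rows=10_000))
        print(query_cost(["int", "datetime"], rows=10_000))

    Even a toy model like this makes the difference between the two column lists visible at review time, which is really all the "reward the fastest code" idea needs.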

    Read the article

  • How to stop middle button in page from trying to open whatever is in my clipboard in Firefox on Ubun

    - by therefromhere
    In Firefox on Ubuntu, if I middle-click anywhere on a page that's not a link, it seems to treat whatever text is in the clipboard as a URL and tries to load it. This is annoying, since if I either accidentally click the middle button or (more often) miss a link when trying to middle-click it, I'll either go to whatever URL is in my clipboard or get an alert saying:

        The URL is invalid and cannot be loaded

    Is there any way of either:

        a) Disabling this functionality so that middle-click on a non-link does nothing (maybe an about:config setting?), or
        b) Making the functionality more intelligent, so that it will only try to open the text if it looks like a URL (this seems like a job for a plugin)?

    Read the article

  • Continuous Integration using Docker

    - by Leon Mergen
    One of the main advantages of Docker is the isolated environment it brings, and I want to leverage that advantage in my continuous integration workflow. A "normal" CI workflow goes something like this:

        1. Poll repository for changes
        2. Pull from repository
        3. Install dependencies
        4. Run tests

    In a Dockerized workflow, it would be something like this:

        1. Poll repository for changes
        2. Pull from repository
        3. Build docker image
        4. Run docker image as container
        5. Run tests
        6. Kill docker container

    My problem is with the "run tests" step: since Docker is an isolated environment, intuitively I would like to treat it as one; this means the preferred method of communication is sockets. However, this only works well in certain situations (a webapp, for example). When testing different kinds of services (for example, a background service that only communicates with a database), a different approach would be required.

    What is the best way to approach this problem? Is it a problem with my application's design, and should I design it in a more TDD, service-oriented way that always listens on some socket? Or should I just give up on isolation, and do something like this:

        1. Poll repository for changes
        2. Pull from repository
        3. Build docker image
        4. Run docker image as container
        5. Open SSH session into container
        6. Run tests
        7. Kill docker container

    SSH'ing into the container seems like an ugly solution to me, since it requires deep knowledge of the contents of the container, and thus breaks the isolation. I would love to hear SO's different approaches to this problem.
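
    One alternative to SSH is to drive the standard docker CLI from the CI script and use docker exec to run the test command inside the already-running container. A minimal sketch (the image name, container name and test command "pytest" are placeholders, not part of the question):

        import subprocess, sys

        IMAGE = "myapp:ci"          # placeholder image name
        CONTAINER = "myapp-ci-run"  # placeholder container name

        def sh(*args):
            # Run a command, echo it, and raise if it exits non-zero.
            print("+", " ".join(args))
            subprocess.check_call(args)

        status = 1
        try:
            sh("docker", "build", "-t", IMAGE, ".")                   # build the image from the checkout
            sh("docker", "run", "-d", "--name", CONTAINER, IMAGE)     # start the container as a service
            sh("docker", "exec", CONTAINER, "pytest", "/app/tests")   # run the tests inside the container
            status = 0
        except subprocess.CalledProcessError:
            status = 1
        finally:
            subprocess.call(["docker", "rm", "-f", CONTAINER])        # always remove the container
        sys.exit(status)

    docker exec keeps the filesystem isolation intact while avoiding an SSH daemon in the image; the trade-off is that the CI script still needs to know where the tests live inside the container.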

    Read the article

  • Web Application: Combining View Layer Between PHP and Javascript-AJAX

    - by wlz
    I'm developing a web application using PHP with the CodeIgniter MVC framework, with huge real-time client-side functionality needs. This is my first time building a large-scale client-side app, so I'm combining PHP with a large set of Javascript modules in one project. As you already know, an MVC framework separates application modules into Model-View-Controller. My concern is about the View layer. I could display the data in the DOM with PHP's built-in script tags by loading some data in the Controller. Otherwise I could use AJAX to pull the data - treating the Controller like a service only - and display it with Javascript. Here is some visualization. I could put the data directly from the Controller:

        <label>Username</label>
        <input type="text" id="username" value="<?=$userData['username'];?>"><br />
        <label>Date of birth</label>
        <input type="text" id="dob" value="<?=$userData['dob'];?>"><br />
        <label>Address</label>
        <input type="text" id="address" value="<?=$userData['address'];?>">

    Or pull it using AJAX:

        $.ajax({
            type: "POST",
            url: config.indexURL + "user",
            dataType: "json",
            success: function(data) {
                $('#username').val(data.username);
                $('#dateOfBirth').val(data.dob);
                $('#address').val(data.address);
            }
        });

    So, which approach is better given that my application has complex client-side functionality? On the other hand, PHP-CI has a default mechanism to put the data directly from the Controller, so why use AJAX?

    Read the article

  • Idea of an algorithm to detect a website's navigation structure?

    - by Uwe Keim
    Currently I am in the process of developing an importer of any existing, arbitrary (static) HTML website into the upcoming release of our CMS. While downloading the files is solved successfully, I'm pulling my hair out when it comes to detecting a site structure (pages and subpages) purely from the HTML files, without the user specifying additional hints. Basically I want to get a tree like:

        + Root page 1
            + Child page 1
            + Child page 2
                + Child child page 1
            + Child page 3
        + Root page 2
            + Child page 4
        + Root page 3
        + ...

    I.e. I want to be able to detect the menu structure from the links inside the pages. This doesn't have to be 100% accurate, but at least I want to achieve more than just a flat list. I thought of looking at multiple pages to spot similar areas, identify these as menu areas and parse the links there, but after all I'm not that satisfied with this idea.

    My question: Can you imagine any algorithm for detecting such a structure?

    Update 1: What I'm looking for is not a web spider, but an algorithm to create a logical tree of the relationships between the pages, so that I can create pages and subpages inside my CMS when importing them.

    Update 2: As per Robert's suggestion I'll solve this by starting at the root page and then simply parsing links as I go, treating every link inside a page as a child page. I'll probably recurse not in a depth-first manner but rather in a breadth-first manner to get a more balanced navigation structure.
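
    A minimal sketch of the breadth-first idea from Update 2 (assuming the pages are already downloaded locally; the file names and the regex-based link extraction are simplifications, not part of the original question):

        import re
        from collections import deque
        from pathlib import Path

        HREF = re.compile(r'href="([^"#?]+\.html?)"', re.IGNORECASE)  # crude local-link extraction

        def build_tree(root_file, site_dir):
            # Breadth-first walk: the page where a link is first seen becomes its parent.
            site = Path(site_dir)
            tree = {root_file: []}          # parent -> list of children
            seen = {root_file}
            queue = deque([root_file])
            while queue:
                page = queue.popleft()
                html = (site / page).read_text(errors="ignore")
                for link in HREF.findall(html):
                    if link not in seen and (site / link).exists():
                        seen.add(link)
                        tree[page].append(link)
                        tree[link] = []
                        queue.append(link)   # breadth-first keeps the tree shallow and balanced
            return tree

        def print_tree(tree, node, depth=0):
            print("    " * depth + "+ " + node)
            for child in tree[node]:
                print_tree(tree, child, depth + 1)

        # Example with a hypothetical downloaded site:
        # print_tree(build_tree("index.html", "/tmp/site"), "index.html")

    The first page that links to a document claims it as a child, which is exactly why the breadth-first order matters: pages reachable directly from the root become root-level children instead of being buried under whichever deep page happened to be crawled first.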

    Read the article

  • Problems installing Linux to IDE connected compact flash card

    - by mathematician1975
    I have been trying to install Ubuntu on some hardware (a Netcom NC-499 board that contains a Vortex86DX processor). I am trying to install to a compact flash card attached to the board via an IDE adaptor, the aim being that the board will boot up and simply treat the compact flash like a normal hard drive. The processor vendor claims support for Ubuntu 10.04, but I am having problems installing it onto the card. I have been trying with a USB CD-ROM drive and the standard .iso image from the Ubuntu site (the md5 checksum works out fine, so no problem there) but I have had no success at all. I have been able to do this with Ubuntu 8.04 but with no other version (the 9.04 and 10.04 desktop and alternate discs all fail). My question is: what other options are available to me to try and install this? I have searched extensively, but other than a few sites describing USB-based installs using flash memory sticks for very specific hardware, I have found no useful info at all. Any suggestions will be gratefully received.

    Read the article

  • Skipping hardlinks when using TSM Backup

    - by Lars Haugseth
    We need to back up a filesystem with lots of hardlinks. Since there are several hardlinks for each "true" file, we would like to skip all the hardlinks when backing up the filesystem, to avoid n exact copies of each file. The backup is done using Tivoli Storage Manager Backup, and we've been unable to get it to treat hardlinks as anything other than separate files to be backed up alongside each other. In case it's relevant for possible solutions, I'd like to note that it's possible to tell a hardlink from a proper file by the filename:

        foobarbaz-123.ext    # file
        foobarbaz-123-1.ext  # hardlink
        foobarbaz-123-2.ext  # hardlink
        barbazfoo-456.ext    # file
        barbazfoo-456-1.ext  # hardlink
        barbazfoo-456-2.ext  # hardlink
        barbazfoo-456-3.ext  # hardlink

    That is, all hardlinks have two hyphens in the filename, whereas proper files have just the one. The server is running Ubuntu Linux, and the files are situated on a GFS volume on our SAN.
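
    Not a TSM-specific answer, but a sketch of how the naming convention above could be turned into an exclude list before the backup runs (the root path and output file are placeholders; it also cross-checks each candidate against the real link count so a mis-named file is not skipped by accident):

        import os, re

        ROOT = "/mnt/gfs-volume"   # placeholder path to the GFS volume
        # Matches the two-hyphen pattern described above, e.g. foobarbaz-123-2.ext
        HARDLINK_NAME = re.compile(r"^[^-]+-[^-]+-\d+\.[^.]+$")

        with open("/tmp/hardlink-excludes.txt", "w") as out:
            for dirpath, _dirs, files in os.walk(ROOT):
                for name in files:
                    full = os.path.join(dirpath, name)
                    looks_like_link = bool(HARDLINK_NAME.match(name))
                    really_linked = os.stat(full).st_nlink > 1   # verify it really has extra links
                    if looks_like_link and really_linked:
                        out.write(full + "\n")

    The resulting list could then be fed to whatever exclude mechanism the backup client offers; whether TSM's include/exclude rules can consume it directly is left open.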

    Read the article

  • How do I stop Firefox on Ubuntu from trying to load whatever is in my clipboard when I middle-click

    - by therefromhere
    In Firefox on Ubuntu, if I middle-click anywhere on a page that's not a link, it seems to treat whatever text is in the clipboard as a URL and tries to load it. This is annoying, since if I either accidentally click the middle button or (more often) miss a link when trying to middle-click it, I'll either go to whatever URL is in my clipboard or get an alert saying:

        The URL is invalid and cannot be loaded

    Is there any way of either:

        a) Disabling this functionality so that middle-click on a non-link does nothing (maybe an about:config setting?), or
        b) Making the functionality more intelligent, so that it will only try to open the text if it looks like a URL (this seems like a job for a plugin)?

    Read the article

  • Stereo images rectification and disparity: which algorithms?

    - by alessandro.francesconi
    I'm trying to figure out which are currently the two most efficient algorithms that permit, starting from an L/R pair of stereo images created using a traditional camera (so affected by some epipolar line misalignment), producing a pair of adjusted images plus their depth information by looking at their disparity. Actually I've found lots of papers about these two methods, like:

        "Computing Rectifying Homographies for Stereo Vision" (Zhang - seems one of the best for rectification only)
        "Three-step image rectification" (Monasse)
        "Rectification and Disparity" (slideshow by Navab)
        "A fast area-based stereo matching algorithm" (Di Stefano - seems a bit inaccurate)
        "Computing Visual Correspondence with Occlusions via Graph Cuts" (Kolmogorov - this one produces a very good disparity map, with occlusion information too, but is it efficient?)
        "Dense Disparity Map Estimation Respecting Image Discontinuities" (Alvarez - too long for a first review)

    Could anyone please give me some advice for orienting myself in this wide topic? What kind of algorithm/method should I tackle first, considering that I'll work on a very simple input: a pair of left and right images and nothing else, no more information (some papers are based on additional, pre-taken calibration info)?

    Speaking about working implementations, the only interesting results I've seen so far belong to this piece of software, but only for automatic rectification, not disparity: http://stereo.jpn.org/eng/stphmkr/index.html I tried the "auto-adjustment" feature and it seems really effective. Too bad there is no source code...
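
    For context, a standard relation from the stereo literature (not taken from any of the papers above specifically): once the pair is rectified so that corresponding points share a scanline, depth follows from disparity as

        Z = f * B / d

    where f is the focal length in pixels, B is the baseline between the two cameras, and d = xL - xR is the disparity of a matched point. This is why rectification quality directly limits the accuracy of any depth map recovered afterwards.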

    Read the article

  • mod_jk fails to detect error state because JBoss gives 404, not 500

    - by Ilya Sher
    Configuration: Apache + mod_jk, with several workers on other machines (load balancing). When JBoss fails to deploy an application, for example because of a failed connection to the database, requests to /myapp/somepage generate a 404. How do I configure JBoss to return 500 for everything under /myapp when the application failed to deploy?

    Additional info: since 404 is not an "error state" code, mod_jk does not mark the worker as failed and continues to route traffic there. And since there might also be valid requests to this application that generate 404, I cannot simply configure mod_jk to treat 404 as an "error state".

    Read the article

  • Security Goes Underground

    - by BuckWoody
    You might not have heard of as many data breaches recently as in the past. As you're probably aware, I call them out here as often as I can, especially the big ones in government and medical institutions, because I believe those can have lasting implications on a person's life. I think that my data is personal - and I've seen the impact of someone having their identity stolen. It's a brutal experience that I wouldn't wish on anyone.

    So with all of that, it stands to reason that I hold data professionals to the highest standards on security. I think your first role is to protect the data you have: number one because losing it can be so harmful, and number two because it isn't yours. It belongs to the person that data describes.

    You might think I'm happy about that downturn in reported data losses. Well, I was, until I learned that companies have realized they suffer a lowering of their stock when they report a breach, but not when they don't. So, since we all do what we are measured on, they don't. So now, not only are they not protecting your information, they are hiding the fact that they are losing it.

    So take this as a personal challenge. Make sure you have a security audit on your data, and treat any breach like a personal failure. We're the gatekeepers, so let's keep the gates.

    Read the article

  • Warnings When Undo Isn't Possible

    - by ultan o'broin
    Enjoyed this post, Never Use a Warning When You Mean Undo, by Aza Raskin. It makes sense never to warn users if an undo option is possible. The examples given are from the web space. Here's the conclusion:

        Warnings cause us to lose our work, to mistrust our computers, and to blame ourselves. A simple but foolproof design methodology solves the problem: "Never use a warning when you mean undo." And when a user is deleting their work, you always mean undo.

    However, in enterprise apps you may find that an undo option isn't technically possible or desirable. Objects may be shared or part of a flow elsewhere, and undoing something committed to the database (a rollback, I guess) may not be feasible if it becomes locked by another process. Plus, what constitutes user ownership of objects isn't always clear to users. The implications of delete (and other) actions need to be clearly communicated in advance.

    Really, warnings are important in the enterprise space. Data has a very high value, and users can perform a wide variety of actions that may put that data at risk, not always within the application itself (at browser level, for example). That said, throwing warnings all over the place when an undo option is possible is annoying. Instead, treat warnings with respect. When no undo option is possible, use warning messages to communicate potentially dangerous or irrecoverable actions, or the downstream consequences of user actions on the process or task flow. Force the user to respond to a warning message by using a modal dialog with clearly labeled action buttons.

    A great article that got me thinking. Let's see more like that. And let's not forget there are more types of messages than just error messages. User assistance and user experience professionals need to understand when best to use confirmation, information, and warning types too!

    Read the article

  • How to ignore certain coding standard errors in PHP CodeSniffer

    - by Tom
    We have a PHP 5 web application and we're currently evaluating PHP CodeSniffer in order to decide whether enforcing code standards improves code quality without causing too much of a headache. If it seems good we will add an SVN pre-commit hook to ensure all new files committed on the dev branch are free from coding standard smells.

    Is there a way to configure PHP CodeSniffer to ignore a particular type of error? Or to get it to treat a certain error as a warning instead? Here's an example to demonstrate the issue:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html>
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        </head>
        <body>
        <div>
        <?php
            echo getTabContent('Programming', 1, $numX, $numY);
            if (isset($msg)) {
                echo $msg;
            }
        ?>
        </div>
        </body>
        </html>

    And this is the output of PHP_CodeSniffer:

        > phpcs test.php
        --------------------------------------------------------------------------------
        FOUND 2 ERROR(S) AND 1 WARNING(S) AFFECTING 3 LINE(S)
        --------------------------------------------------------------------------------
          1 | WARNING | Line exceeds 85 characters; contains 121 characters
          9 | ERROR   | Missing file doc comment
         11 | ERROR   | Line indented incorrectly; expected 0 spaces, found 4
        --------------------------------------------------------------------------------

    I have an issue with the "Line indented incorrectly" error. I guess it happens because I am mixing the PHP indentation with the HTML indentation. But this makes it more readable, doesn't it? (Taking into account that I don't have the resources to move to an MVC framework right now.) So I'd like to ignore it, please.

    Read the article

  • Configure Courier IMAP to deliver mail to multiple hostnames

    - by vy32
    I have a Courier IMAP server running on a private server at DreamHost. The private server's hostname is psxxxx.dreamhostps.com. I also have a CNAME for the private server; call it mydomain.com. I want to send email to [email protected] and have it delivered to [email protected]. Right now the Courier server on the private server is bouncing the mail. On other mail servers there is a file into which you put all of the names that the host responds to. The names are all synonyms for the host's name, so user@ addresses at any of them are equivalent. How do I configure Courier to treat multiple hostnames as synonyms for its host name? Thank you.

    Read the article

  • OpenWorld Approaching... A few opportunities to share your needs with Oracle

    - by RichMill
    At OpenWorld, from Monday the 1st to Wednesday the 3rd, the My Oracle Support and Enterprise Manager user research team will be in action. If you are someone who does patching, edits configurations, or uses either MOS configuration management (the collector) OR Enterprise Manager configuration compare or search, we have a treat for you! Come give us your feedback on how you do your tasks, what needs you have, and how we can do better in this space. We will be doing this during OOW, but an OOW badge is not required to participate.

    OR

    If you are someone who downloads large amounts of software (say, the entire EBS stack) and wants to understand how to customize a "recommended" stack of software for yourself or your customers, let us know! We have a study looking at how to create, customize, and download all of the software needed for an installation. This will be done after OOW via web conference, so customers from anywhere in the world can participate. We want to hear from you, so we can get this right!

    E-mail us directly at [email protected] - or leave a comment with your email - so we can get your feedback into one or both of these two discussions. Hope you can participate!

    Read the article

  • Octree implementation for frustum culling

    - by Manvis
    I'm learning modern (>= 3.1) OpenGL by coding a 3D turn-based strategy game in C++. The maps are composed of 100x90 3D hexagon tiles that range from 50 to 600 tris (20 different types), plus any player units on those tiles. My current rendering technique involves sorting meshes by the shaders they use (minimizing state changes) and then calling glDrawElementsInstanced() for drawing. I still get a solid 16.6 ms/frame on my GTX 560 Ti machine, but the game struggles (45.45 ms/frame) on an old 8600GT card. I'm certain that using an octree and frustum culling will help me here, but I have a few questions before I start implementing it:

        1. Is it OK for an octree node to have multiple meshes in it (e.g. can a soldier and the hex tile he's standing on end up in the same octree node)?
        2. How is one supposed to treat changes in object position (e.g. several units are moving 3 hexes down)? I can't seem to find a good explanation of how to do it.
        3. As I've noticed, sorting meshes by shaders is a really good way to save GPU time. If I put node contents into, let's say, a std::list and sort it before rendering, do you think I would gain any performance, or would it just create overhead on the CPU's end?

    I know that this sounds like premature optimization and that implementing + testing would be the best way to find out, but perhaps someone knows from experience?
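
    Not an answer from experience, just a sketch of how points 1 and 3 could fit together (in Python rather than C++, with placeholder bounds and a deliberately crude box-vs-box visibility test standing in for a real frustum test): each node keeps a plain list of whatever meshes fall inside it, the cull pass gathers the visible set, and the sort by shader happens once over the gathered list rather than per node.

        from dataclasses import dataclass, field

        @dataclass
        class Mesh:
            shader_id: int
            aabb: tuple                                    # ((minx,miny,minz), (maxx,maxy,maxz))

        @dataclass
        class Node:
            bounds: tuple
            meshes: list = field(default_factory=list)     # a node may hold tiles AND units together
            children: list = field(default_factory=list)

        def intersects(box_a, box_b):
            # Placeholder visibility test: plain AABB overlap instead of a real frustum check.
            (a_min, a_max), (b_min, b_max) = box_a, box_b
            return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

        def gather_visible(node, frustum, out):
            if not intersects(node.bounds, frustum):
                return
            out.extend(m for m in node.meshes if intersects(m.aabb, frustum))
            for child in node.children:
                gather_visible(child, frustum, out)

        def draw(mesh):
            print("draw mesh with shader", mesh.shader_id)  # stand-in for the instanced draw call

        def render(root, frustum):
            visible = []
            gather_visible(root, frustum, visible)
            # One sort over the whole visible set keeps state changes minimal
            # no matter which node each mesh came from.
            visible.sort(key=lambda m: m.shader_id)
            for mesh in visible:
                draw(mesh)

        # Tiny smoke test with made-up bounds: one node, two meshes, everything visible.
        root = Node(bounds=((0, 0, 0), (10, 10, 10)),
                    meshes=[Mesh(shader_id=2, aabb=((0, 0, 0), (1, 1, 1))),
                            Mesh(shader_id=1, aabb=((2, 2, 2), (3, 3, 3)))])
        render(root, frustum=((0, 0, 0), (10, 10, 10)))

    Sorting once per frame over the flat visible list, rather than keeping per-node sorted containers, also sidesteps most of the question-3 overhead concern.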

    Read the article

  • Setting up SVN+SSH for multiple users through one local user.

    - by Warlax
    Hi, I need to make our SVN repository accessible through the firewall, but without creating a local user for each potential external user. Instead, I would like to set up SVN+SSH to route all external users through a single local user name. We would like each external user to authenticate with SSH the regular way, but then treat their instance of svnserve as if they're all that single local user and, possibly, control what parts of the repository each external user can access. I know that I will need to set up my svnserve config according to the official guide. I tried, but the instructions are fuzzy and I am still relatively new to Linux. What exactly are the steps to proceed? And how would you go about testing this? Thanks for your help.

    Read the article

  • Tiling window manager where dual heads share common workspace

    - by mikero
    I want to use a tiling window manager with my dual monitor setup, but almost all wms seem to treat each monitor as an independent workspace. This means that I can change the workspace of monitor 0 without affecting the workspace of monitor 1. This is not what I want -- I want a workspace to span both monitors, where each monitor is essentially a separate column for tiling (my monitors are oriented vertically, so they are well-suited as tiling columns). When I switch workspaces, say with Mod-[0-9], I want both monitors to change contents. So far the only wm I have found to support this is wmii, but I'd love to try some other options. Have I missed this capability from other tiling wms?

    Read the article

  • IE9 and the Mystery of the Broken Video Tag

    - by David Wesst
    I was very excited when Microsoft released the Internet Explorer 9 Release Candidate. As far as I was concerned, this was another nail in the coffin for IE6 and a step in the right direction for us .NET web developers, as our base camp was finally starting to support the latest and greatest future-web standards. Unfortunately, my celebration was short lived as I soon hit a snag while loading up an HTML5 site I was building in Visual Studio 2010.

    The Mystery

    After updating Internet Explorer, I ran my HTML5 site that had the oh-so-lovely HTML5 video tag showing a video. Even though this worked in IE9 Beta, it appeared that IE9 RC could not load the same file. I figured that it was the video codec. Maybe IE9 RC no longer supported the video codec I used to encode my video. Here's the code I used:

        <video width="854" height="480" id="myOtherVideo" autoplay="" controls="">
            <source src="/DemoSite1/Media/big_buck_bunny.mp4"/>
            <div>
                <p>Your browser does not support HTML5 Video.</p>
            </div>
        </video>

    As you can see from the code, I had the "fail-safe" code inside the video tag, the idea being that if the video tag, or the video files themselves, are not supported by the browser, my video should fail gracefully. What was even more strange was the fact that it worked in all the other HTML5 browsers that supported video.

    The Investigation

    Whoa! DJ, stop the music. How can any of that make sense? Would the IE team really take such huge strides forward only to forget to include a feature that was already in the beta? I don't think so. I did plenty of searching on the web and asking around, but could not seem to find anyone else having the same problem. Eventually I came across this post talking about declaring the MIME type in the .htaccess file. That got me thinking: does my web server support the video MIME type? I was using VS2010, so how do I know what kind of MIME types are supported by default? Still, my page hosted in Cassini (the web development server in VS2010) worked in the other browsers. Why wouldn't it work with IE9 RC? To answer that, it was time to open up the upgraded toolbox known as the Developer Tools in IE9 and use the new Network tab.

    The Conclusion

    If you take a closer look at the results displayed in the Network tab, you can see that IE9 RC has interpreted the video file as text/html rather than video/mp4. To make this work, I decided to use IIS to debug my HTML5 web application by setting the web project's properties. Then, I added the MIME types that I want to support (i.e. video/mp4, video/ogg, video/webm). Et voila! The Mystery of the Broken Video Tag is solved.

    After Thoughts

    After solving the mystery, I still had the question of why my site worked in Chrome, Safari, and Firefox 3.6. After asking around, the best answer that I received was from my colleague Tyler Doerksen. He said that IE9 likely depends on the server telling it what kind of file it is downloading, rather than trying to read the metadata about the data it is about to download before doing anything. I have no facts to back this up, but it makes sense to me. In a browser war where milliseconds can make your browser fall back a few places in the race for supremacy, maybe the IE team opted to depend on the server knowing what kind of content it is serving up. Makes sense to me. In any case, that is just an educated guess. If you have any comments, feel free to post them below.

    This post also appears at http://david.wes.st

    Read the article

  • Inspiration

    - by Oracle Campus Blog
    Once again, I find myself back in Seoul - ASEM Tower, 16th floor, in a mobile room. I'm busy preparing for the interview process that is about to take place for Oracle Korea's GIP 7th (Graduate Intake Program): scheduling the first round interviews, organizing interview guidelines, educating interviewers on the process and framework, and getting all the logistics ready for the first round.

    Seoul - or Korea, rather - is a fascinating place: highly efficient, with the utmost respect for seniors, and results oriented. When students come in for an interview, at first it is hard to tell them apart - there seems to be an accepted interview attire that must be worn. Males and females all dress in black suits with white shirts underneath, the males wearing simple, dark-colored ties. During the interview, they would all sit very upright, bow when entering the room, place their hands on their laps, and very often hold only minimal eye contact. They would project their voices to convey confidence, speak in the formal Korean register at all times, and treat every question, every moment, with extreme clarity and the utmost professionalism. When the interview concluded, they would all stand, hands by their sides, bow 90 degrees, and thank all the interviewers for their precious time and the opportunity. As soon as they left the interview room, I could hear their sighs of relief as they commended each other on their efforts.

    The more I learn about Korean culture, the more it inspires me. Their patriotism, their respect for each other, their values, their appreciation, their motivation, their desires and passion - it truly was an experience for me (even as a recruiter), and I can't help but feel truly impressed and motivated to live for every moment.

    Philip Yi
    Oracle Campus Recruiter

    Read the article

  • Architecture Standards - BPMN vs. BPEL for Business Process Management

    - by pat.shepherd
    I often get asked which business process standard an organization should use: BPMN or BPEL? As I explain to folks, they both have strengths. Here is a great article that helps in understanding the benefits of both and where to use them. The good news is that, with Oracle SOA Suite and BPM Suite, you have the option and flexibility to use both in the same SCA model and runtime container. Good stuff. Here is the great article that Mark Nelson wrote:

        The right tool for the right job

        BPEL and BPMN are both 'languages' or 'notations' for describing and executing business processes. Both are open standards. Most business process engines will support one or the other of these languages. Oracle, however, has chosen to support both and treat them as equals. This means that you have the freedom to choose which language to use on a process by process basis. And you can freely mix and match, even within a single composite. (A composite is the deployment unit in an SCA environment.) So why support both? Well, it turns out that BPEL is really well suited to modeling some kinds of processes and BPMN is really well suited to modeling other kinds of processes. Of course there is a pretty significant overlap where either will do a great job.

        What BPM adds to SOA Suite | RedStack

    Read the article

  • Removing specific part of filename (what's after the second dash) for all files in a folder

    - by Bodo
    I use the command line utility youtube-dl to download videos from YouTube and make mp3s from them with avconv. I'm doing this under Ubuntu 14.04 and am very happy with it. The utility downloads the files and saves them with the following name scheme:

        TITLE(artist-track)-ID.mp3

    So an actual filename looks like:

        EPIC RAP BATTLE of MANLINESS-_EzDRpkfaO4.mp3

    Some other file names in the folder look like:

        EPIC RAP BATTLE of MANLINESS-_EzDRpkfaO4.mp3
        Martin Garrix - Animals (Official Video)-gCYcHz2k5x0.mp3
        Stromae - Papaoutai-oiKj0Z_Xnjc.mp3

    At first, this was no problem. It didn't bother me while listening to my music in Rhythmbox. But when moving to a phone or other devices it is pretty confusing to see such a long name, and some players, like the Samsung ones, treat that last part of the name (the ID after the second dash) as the Album or something.

    I'd like to create a bash script that removes what's after the second dash in the name for all files, so it'll turn them:

        From: Martin Garrix - Animals (Official Video)-gCYcHz2k5x0.mp3
        To:   Martin Garrix - Animals (Official Video).mp3

    Is it also possible to instruct youtube-dl to exclude the ID from now on? I am currently downloading with the command:

        youtube-dl --extract-audio --audio-quality 0 --audio-format mp3 URL
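
    A sketch of the rename (in Python rather than the bash asked for, and matching the trailing 11-character YouTube ID instead of counting dashes, since titles themselves may contain dashes; the music folder path is a placeholder):

        import re
        from pathlib import Path

        MUSIC_DIR = Path("/home/me/Music")   # placeholder path
        # YouTube IDs are 11 characters of letters, digits, '-' and '_', glued on with a dash.
        ID_SUFFIX = re.compile(r"-[\w-]{11}(?=\.mp3$)")

        for f in MUSIC_DIR.glob("*.mp3"):
            new_name = ID_SUFFIX.sub("", f.name, count=1)
            # Skip files that have no ID suffix or whose cleaned name already exists.
            if new_name != f.name and not (MUSIC_DIR / new_name).exists():
                f.rename(MUSIC_DIR / new_name)   # "...-gCYcHz2k5x0.mp3" -> "....mp3"

    As for future downloads, youtube-dl's -o/--output option takes a filename template, and if I remember the template syntax correctly, something like -o "%(title)s.%(ext)s" drops the ID that the default template appends.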

    Read the article

< Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >