Search Results

Search found 29588 results on 1184 pages for 'really'.

Page 147 of 1184

  • Why does it take so long to finalize the HTML 5 spec?

    - by EpsilonVector
    I was reading this and one sentence caught my eye (emphasis mine): So Ian Hickson, XHTML’s biggest critic, fathered HTML 5, an action-oriented toddler specification that won’t reach adulthood until 2022, although some of it can be used today. Is that true? Is that really the HTML 5 development cycle? Why is it taking so long? What makes it so difficult to get right that it won't be final until 11 years from now?

    Read the article

  • Do you Kung Foo? [closed]

    - by Darknight
    As a programmer who also trains in martial arts (Wing Tsun, Escrima, and Judo), I am really interested to know whether there are other fellow programmers who also practice martial arts. Do you practice martial arts, or do you know of programmers who do? Further question: do you see any analogy between your art and programming? [Apologies for the terrible pun in the title :) but I do like it cheesy...]

    Read the article

  • how to convert avi (xvid) to mkv or mp4 (h264)

    - by bcsteeve
    Very noob when it comes to video. I'm trying to make sense of what I'm finding via Google... but it's mostly Greek to me. I have a bunch of AVI files that won't play in my WD TV Play box. Mediainfo tells me they are XviD. Specs for the box show that should be fine... but digging through forums says it's hit-and-miss. So I'd like to try converting them to H.264-encoded MKV or MP4 files. I gather avconv is the tool, but reading the manual just has me really, really confused. I tried the very basic example of:

        avconv -i file.avi -c copy file.mp4

    It took less than 4 seconds. And it worked... sort of. It "played" in that something came up on the screen... but there was horrible artifacting and scenes would just sort of melt into each other. I want to preserve quality if possible. I'm not concerned about file size. I'm not terribly concerned with the time it takes either, provided I can do them in a batch. Can someone familiar with the process please give me a command with the options? Thank you for your help. I'm posting the mediainfo output in case it helps:

        General
        Complete name           : \\SERVER\Video\Public\test.avi
        Format                  : AVI
        Format/Info             : Audio Video Interleave
        File size               : 189 MiB
        Duration                : 11mn 18s
        Overall bit rate        : 2 335 Kbps
        Writing application     : Lavf52.32.0

        Video
        ID                      : 0
        Format                  : MPEG-4 Visual
        Format profile          : Advanced Simple@L5
        Format settings, BVOP   : 2
        Format settings, QPel   : No
        Format settings, GMC    : No warppoints
        Format settings, Matrix : Default (H.263)
        Muxing mode             : Packed bitstream
        Codec ID                : XVID
        Codec ID/Hint           : XviD
        Duration                : 11mn 18s
        Bit rate                : 2 129 Kbps
        Width                   : 720 pixels
        Height                  : 480 pixels
        Display aspect ratio    : 16:9
        Frame rate              : 29.970 fps
        Standard                : NTSC
        Color space             : YUV
        Chroma subsampling      : 4:2:0
        Bit depth               : 8 bits
        Scan type               : Progressive
        Compression mode        : Lossy
        Bits/(Pixel*Frame)      : 0.206
        Stream size             : 172 MiB (91%)
        Writing library         : XviD 1.2.1 (UTC 2008-12-04)

        Audio
        ID                      : 1
        Format                  : MPEG Audio
        Format version          : Version 1
        Format profile          : Layer 3
        Mode                    : Joint stereo
        Mode extension          : MS Stereo
        Codec ID                : 55
        Codec ID/Hint           : MP3
        Duration                : 11mn 18s
        Bit rate mode           : Constant
        Bit rate                : 192 Kbps
        Channel(s)              : 2 channels
        Sampling rate           : 48.0 KHz
        Compression mode        : Lossy
        Stream size             : 15.5 MiB (8%)
        Alignment               : Aligned on interleaves
        Interleave, duration    : 24 ms (0.72 video frame)
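
    The -c copy run above only remuxes the existing XviD and MP3 streams into a new container with no re-encoding, which is why the result still misbehaves (the "Packed bitstream" flag mediainfo reports is a known troublemaker when remuxed into MP4). What the question asks for is a real re-encode to H.264. A hedged sketch, using the libx264 options avconv/ffmpeg accept (tune -crf and -preset to taste; lower CRF means higher quality):

        # Re-encode the video to H.264, keep the MP3 audio as-is
        avconv -i file.avi -c:v libx264 -preset slow -crf 18 -c:a copy file.mkv

        # Batch-convert every AVI in the current directory
        for f in *.avi; do
            avconv -i "$f" -c:v libx264 -preset slow -crf 18 -c:a copy "${f%.avi}.mkv"
        done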

    Read the article

  • Music Notation Editor - Refactoring view creation logic elsewhere

    - by Cyril Silverman
    Let me preface by saying that knowing some elementary music theory and music notation may be helpful in grasping the problem at hand. I'm currently building a Music Notation and Tablature Editor (in JavaScript), and I've come to a point where the core parts of the program are more or less there. All functionality I plan to add at this point will really build off the foundation that I've created. As a result, I want to refactor to really solidify my code. I'm using an API called VexFlow to render notation. Basically I pass the parts of the editor's state to VexFlow to build the graphical representation of the score. A rough and stripped-down UML diagram outlines my program. In essence, a Part has many Measures, which have many Notes, which have many NoteItems (yes, this is semantically weird, as a chord is represented as a Note with multiple NoteItems, individual pitches or fret positions). All of the relationships are bi-directional. There are a few problems with my design, because my Measure class contains the majority of the entire application's view logic. The class holds the data about all VexFlow objects (the graphical representation of the score). It contains the graphical Staff object and the graphical notes. (Shouldn't these be placed somewhere else in the program?) While VexFlowFactory deals with the actual creation (and some processing) of most of the VexFlow objects, Measure still "directs" the creation of all the objects and the order they are supposed to be created in, for both the VexFlowStaff and VexFlowNotes. I'm not looking for a specific answer, as you'd need a much deeper understanding of my code; just a general direction to go in. Here's a thought I had: create MeasureView/NoteView/PartView classes that contain the basic VexFlow objects for each class, in addition to any extraneous logic for their creation. But where would these views be contained? Do I create a ScoreView that is a parallel graphical representation of everything, so that ScoreView.render() would cascade down into each PartView, calling render for each one, and cascade down into each MeasureView, etc.? Again, I just have no idea what direction to go in. The more I think about it, the more ways to go seem to pop into my head. I tried to be as concise and simplistic as possible while still getting my problem across. Please feel free to ask me any questions if anything is unclear. It's quite a struggle trying to dumb down a complicated problem to its core parts.
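
    A minimal sketch of that cascading-render idea, in JavaScript since that is what the editor is written in (all class and method names here are hypothetical, not from VexFlow or the poster's codebase):

        // Each view wraps one model object and owns its graphical counterparts.
        class MeasureView {
          constructor(measure) {
            this.measure = measure; // the VexFlow staff/note objects would be built here
          }
          render(context) {
            // delegate the actual drawing to the VexFlow objects
          }
        }

        class PartView {
          constructor(part) {
            this.measureViews = part.measures.map(m => new MeasureView(m));
          }
          render(context) {
            this.measureViews.forEach(mv => mv.render(context));
          }
        }

        // ScoreView is the parallel graphical representation of everything:
        // render() cascades ScoreView -> PartView -> MeasureView.
        class ScoreView {
          constructor(score) {
            this.partViews = score.parts.map(p => new PartView(p));
          }
          render(context) {
            this.partViews.forEach(pv => pv.render(context));
          }
        }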

    Read the article

  • SQL Server is now supported by phpBB!

    - by The Official Microsoft IIS Site
    Our team is really excited to announce the new release of phpBB 3.0.7-PL1 by the phpBB community, which supports SQL Server; you can download it from the Web Application Gallery for a very easy install! But let's step back for a moment and provide some background. Microsoft's Interoperability team has been working with a few PHP projects to support SQL Server using our driver, and phpBB was one of them. Although phpBB already had some support for SQL Server / Access, our 1.1 release driver offered...(read more)

    Read the article

  • What are Advanced Search Operators?

    Search engines provide additional tools, referred to as advanced search operators, that give power users far more control when searching. Advanced search operators are special terms you can insert into your search query to find specific kinds of information that a common search cannot surface. A number of these operators are valuable resources for search engine specialists and others who want very specific data, or who wish to narrow their search to very specific sources.
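
    The excerpt names no specific operators; for illustration, a few that Google supports (exact syntax varies slightly between engines):

        site:example.com widgets      restrict results to a single domain
        intitle:"link building"       match words appearing in the page title
        filetype:pdf seo guide        restrict results to a given file type
        "exact phrase here"           match the phrase exactly, in that order
        seo -paid                     exclude pages containing a word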

    Read the article

  • Commerce Anywhere...Where the Web, Store, Mobile, Social and Call Center Come Together

    - by divya.malik
    I am pleased to introduce guest blogger Bill Zujewski today. Bill has just joined the Oracle CRM Product Marketing team as part of our recent ATG acquisition. Based in Cambridge, MA, Bill was the VP of Product Marketing for ATG and collaborated on eCommerce strategy with some of the best brands in the world. Welcome Bill!! BY BILL ZUJEWSKI "Times are a-changing"... or so the song goes. Not long ago, eCommerce just meant having a cool brand and a slick website. Today, customers expect much more... what I think they really want is Commerce Anywhere: a seamless, consistent and personal way to interact or transact business with you and your products, whether they start on the web, go into a store, talk over the phone, access products via their mobile device or on their favorite social media site. They want one more thing... for you to remember them and their history with you... so they can be treated more intelligently and not have to repeat previous interactions. It makes sense to me, I want it too... it saves me time and money. I work with many companies that are trying to understand how to evolve their business structure and technology solutions to meet the challenges of Commerce Anywhere. My advice... think differently and take a more holistic approach to the customer experience and the cross-channel selling solution. Stop integrating siloed legacy systems and start thinking about a single platform as your new foundation... the e-Commerce platform. I recently wrote a new white paper, Commerce Anywhere - A Business and Technology Strategy to Maximize Cross-Channel Commerce Growth, to help our customers better understand how to create that "Commerce Anywhere" customer experience that customers really want. The paper offers practical insights into an IT transformation that can help you leverage a commerce platform to go beyond the web storefront and instead use it to enable rapid expansion into mobile apps, new in-store apps, and interaction with your customers through social commerce. Let me know what you think by posting a comment on this blog.

    Read the article

  • Oracle Gave Me a Chance - ECEMEA Internship Programme

    - by FelixWehmeyer
    My name is Mohammad Raad and I started in the One Year Training program with Oracle on March 1, 2012. I graduated in September 2011 and started searching for a job. Starting your career, as you all know, is hard because some companies see you as a fresh graduate lacking experience, and no one is willing to invest in you. I used to check the recruitment websites daily to see if there were openings to apply to, but unfortunately no one wanted to hire a fresh graduate with zero years of experience. One day I saw Oracle's one-year internship program advertised, and that was what I needed. I applied but expected nothing to happen, because I was used to applying and getting no replies, but they called, and that was the start! I had my first interview over the phone and decided to go to Qatar to continue searching for a job. Two weeks after arriving in Qatar, Oracle called me for a second interview in Lebanon, so I booked a ticket the same day because my interview was the next day. I had my interview and went back to Qatar. In January 2012 I heard from Oracle that I was accepted and that they had chosen me for this program; it was a day I will not forget! Starting on 1st March 2012, I was full of energy, willing to do anything to gain experience and prove that I can do it. What really gives me a big push is my colleagues' motivation and support, especially from Youssef Halawi, my mentor, and Rami Mattar, because they believe in me and track my progress day by day. At Oracle, I meet customers and attend meetings, demos and presentations, partners' events and online trainings. Now I'm focusing on a specific product that I really like and aim to master by the end of my internship. So, dear readers, wish me good luck! I know that I will get the experience that I want because, from Oracle, a leader in its industry, you have the chance to grab as much experience as you can handle, simply because there are no limits to excellence. Do you want to find out more about the open roles within Oracle? Follow us on https://campus.oracle.com.

    Read the article

  • Google webmaster Index Status. Total Indexed=0

    - by hammad
    I previously changed my domain from www.visualstudiolearn.blogspot.com to www.visualstudiolearn.com... I had around 300 posts with the previous domain name and most of them were showing up on Google. Now that I have changed my domain name, the index status shows total indexed as 0, and when I go to the advanced tab it says 304 (not selected) and 217 blocked by robots. I'm really depressed because of this situation. Could you please help out?

    Read the article

  • APress Deal of the Day 19/Oct/2013 - Software Project Secrets: Why Software Projects Fail

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/10/19/apress-deal-of-the-day-19oct2013---software-projects-secrets.aspx

    Today's $10 deal of the day from APress at http://www.apress.com/9781430251019 is Software Project Secrets: Why Software Projects Fail. "Software Project Secrets: Why Software Projects Fail airs dirty laundry about the software industry—how putting project management's priorities above all else is the root cause of problems in software development projects. This book offers solutions to integrate project management with agile methodologies that really work for software development."

    Read the article

  • How to effectively use an overseas SEO team?

    - by Dan Gayle
    My company is currently under contract with a 20+ person team in the Philippines, previously used for comment linking and guest-blogging spun content articles. This is a practice that we're stopping, but we don't want to sever ties with our team because they work hard, they're really cheap, and they produce excellent accounting and reporting of their actions. What are ways that we can best put them to use as a link-generating or content-generating resource? Their English is fair, but not of high enough quality to use them for any direct content creation. Thanks

    Read the article

  • Stuff you should have learned in school but didn't pay attention to at the time

    - by HLGEM
    This question made me think that there was a better question to ask. What did you learn in school that you didn't care about at the time, but that turned out to be useful on the job, or that you had to relearn in the workplace because you had covered it in school but didn't retain the information when you needed it? (I mean for software-related jobs.) I think this might help college students identify some of what they really should be paying attention to while they are in school.

    Read the article

  • links for 2011-02-21

    - by Bob Rhubart
    Calling all enterprise architects | Enterprise architecture - InfoWorld
    Nominations are now open for the 2011 InfoWorld Enterprise Architecture Award, honoring companies whose enterprise architecture initiatives made a difference. (tags: ping.fm)

    Red Tape, Part II : OTN Garage
    "How do you back up all of that storage? Tape: really fast tape. And, lots of it. This creates a whole variety of very interesting challenges today, elevating the topic to – at the very least – glamorous, but I think it qualifies as being downright hot!" - Kemer Thomson (tags: oracle entarch datastorage)

    The Buttso Blathers: Using Secure Config Files with the WebLogic Maven Plugin
    "WebLogic Server has long had a mechanism to provide a more secure way of connecting to the Administration Server from client utilities such that the username and password do not need to be specified and therefore can't be seen from the process list or command shell history." (tags: oracle weblogic)

    World-class EA | Open Group Blog
    "World-class Enterprise Architecture is all about creating definitive collateral that defines how the architecture delivers value for societal value." - Mick Adams (tags: enterprisearchitecture entarch opengroup)

    Enterprise Process Maps: A Process Picture worth a Million Words (Telecommunications Architecture Corner)
    "Every BPM project (holistic BPM kick-off, enterprise system implementation, Service-oriented Architecture, business process transformation, corporate performance management, etc.) should begin with a clear understanding of the business environment..." - Raul Goycoolea (tags: oracle otn telecommunications businessprocess entarch bpm)

    Andrejus Baranovskis's Blog: WebCenter PS3 Customization Manager - Long Awaited Feature for MDS
    Oracle ACE Director Andrejus Baranovski shares "really great news for those of you who are working on MDS personalization and customization support in Oracle Fusion Middleware applications." (tags: oracle otn oracleace webcenter enterprise2.0)

    Oracle WebCenter: Common User Experience Architecture (Oracle Enterprise 2.0 Blog)
    Kellsey Ruppel describes "how the new release of Oracle WebCenter delivers a Common User Experience Architecture." (tags: oracle otn webcenter enterprise2.0)

    Java / Oracle SOA blog: Do your SOA deployments & configuration with AIA
    Oracle ACE Edwin Biemond illustrates the use of the SOA Suite / FMW deployment framework, "one of the Application Integration Architecture (AIA) hidden gems." (tags: oracle oracleace soa otn fusionmiddleware)

    Enterprise Software Development with Java: Clustering Stateful Session Beans with GlassFish 3.1
    Oracle ACE Director Markus Eisele describes what he did "to get a Stateful Session Bean failover scenario working with two instances on one node." (tags: oracle otn oracleace glassfish)

    Enhanced REST Support in Oracle Service Bus 11gR1 (SOA Thinker)
    Jeff Davies illustrates how to re-implement the REST-ful Products services using query strings for passing parameter information. (tags: oracle otn soa REST)

    Read the article

  • The 4 'P's of SEO

    Search engine optimization (SEO) and its role in building a successful online business come down to the 4 'P's: passion, personal relationships, process, and persistence. Like everything in life, rewards follow process and persistence. But those alone will not work without good personal relationships, which build trust, and passion, which is really about caring about what you are offering and to whom you are offering it.

    Read the article

  • Custom Java Web Development vs Spreadsheet

    - by jacktrades
    I need some arguments for why a small business should prefer a custom web-developed solution using a relational database (e.g. Java Servlet + MySQL) over standard spreadsheet programs like Excel, especially now that Office 365 is available in the cloud. As a Java programmer, I need good arguments to convince clients that this approach is better (if it really is). This is a generic situation; I understand that each case is different. Nevertheless, the answers so far have pinpointed the right issues.
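
    One concrete argument, sketched below with hypothetical table and class names (my own illustration, not from the thread): a relational backend enforces data integrity and safe concurrent access centrally, for every user at once, which a shared spreadsheet cannot guarantee.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class OrderDao {
            // The database, not the client, enforces the rules: a foreign key
            // rejects orders for unknown customers and a CHECK constraint can
            // reject a non-positive quantity -- for every concurrent user.
            public void addOrder(Connection con, int customerId, String sku, int qty)
                    throws SQLException {
                String sql = "INSERT INTO orders (customer_id, sku, qty) VALUES (?, ?, ?)";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setInt(1, customerId);  // must exist in customers (FK)
                    ps.setString(2, sku);
                    ps.setInt(3, qty);
                    ps.executeUpdate();        // fails cleanly if a constraint is violated
                }
            }
        }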

    Read the article

  • Free Document/Content Management System Using SharePoint 2010

    - by KunaalKapoor
    That’s right, it’s true. You can use the free version of SharePoint 2010 to meet your document and content management needs, and even run your public-facing website or an internal knowledge bank. SharePoint Foundation 2010 is free. It may not have all the features that you get in the enterprise license, but it still has enough to cater to your needs to build a document management system and replace age-old file shares or folders. I’ve built a dozen content management sites for internal and public use exploiting SharePoint.

    There are hundreds of web content management systems out there (see CMS Matrix). On one hand we have commercial platforms like SharePoint, SiteCore, and Ektron etc., which are the most frequently used, and on the other hand there are free options like WordPress, Drupal, Joomla, and Plone etc., which are pretty popular as well. But I would be very surprised if anyone was able to find a single CMS platform that is all things to all people. In fact, not a lot of people consider SharePoint’s free version among the free CMS options, but it’s high time organizations benefited from this. Through this blog post I wanted to present SharePoint Foundation as an option for running a FREE CMS platform.

    Even if you knew that there is a free version of SharePoint, what most people don’t realize is that SharePoint Foundation is a great option for running web sites of all kinds – not just team sites. It is a great option for many reasons, but in reality it is supported by Microsoft, and above all it is FREE (yay!), and it is extremely easy to get started. From a functionality perspective it’s hard to beat SharePoint. Even the free version, SharePoint Foundation, offers simple data connectivity (through BCS), cross-browser support, accessibility, support for Office Web Apps, blogs, wikis, templates, document support, health analyzer, support for presence, and MUCH more.

    I often get asked: “Can I use SharePoint 2010 as a document management system?” The answer really depends on:

    · What are your specific requirements?
    · What systems do you currently have in place for managing documents?
    · And of course, how much money you have :)

    Benefits? Not many large organizations have benefited from SharePoint yet. For some it has been an IT project to see what they can achieve with it, for others it has been used as a collaborative platform or, in many cases, an extended intranet. SharePoint 2010 has changed the game slightly, as the improvements that Microsoft has made have been noted by organizations, and we are seeing a lot of companies starting to build specific business applications using SharePoint as the basis, and nearly every business process will require documents at some stage. If you require a document management system and have SharePoint in place, then it can be a relatively straightforward decision to use SharePoint, as long as you have reviewed the considerations just discussed. The collaborative nature of SharePoint 2010 is also a massive advantage, as specific departmental or project sites can be created quickly and easily that allow workers to interact in a variety of different ways using one source of information. This also benefits an organization with regard to how it manages its knowledge: if all of the information is in one source, then it is naturally easier to search and manage.

    Is SharePoint right for your organization? As just discussed, this can only be determined after defining your requirements and also planning a longer-term strategy for how you will manage your documents and information. A key factor to look at is how the users would interact with the system and how much value it would deliver for your organization. The amount of data and documents that organizations are creating is increasing rapidly each year, so the ability to archive this information, whilst keeping the ability to know what you have and where it is, is vital to any organization’s management of its information life cycle. SharePoint is best used for the initial life of business documents, where they need to be referenced and accessed. After time, it is often beneficial to archive these to overcome storage and performance issues.

    FREE CMS – SharePoint, Really? In order to show some of the completeness of what comes with this free version of SharePoint 2010, I thought it would make sense to use Wikipedia (since everyone trusts it as a credible source). Wikipedia shows that a web content management system typically has the following components:

    Document Management: CMS software may provide a means of managing the life cycle of a document from initial creation time, through revisions, publication, archive, and document destruction. SharePoint is king when it comes to document management: version history, exclusive check-out, security, publication, workflow, and so much more.

    Content Virtualization: CMS software may provide a means of allowing each user to work within a virtual copy of the entire Web site, document set, and/or code base. This enables changes to multiple interdependent resources to be viewed and/or executed in-context prior to submission. Through the use of versioning, each content manager can preview, publish, and roll back content of pages, wiki entries, blog posts, documents, or any other type of content stored in SharePoint. The idea of each user having an entire copy of the website virtualized is a bit odd to me – not sure why anyone would need that for anything but the simplest of websites.

    Automated Templates: Create standard output templates that can be automatically applied to new and existing content, allowing the appearance of all content to be changed from one central place. Through the use of Master Pages and Themes, SharePoint provides the ability to change the entire look and feel of a site. Of course, the older brother version of SharePoint – SharePoint Server 2010 – also introduces the concept of Page Layouts, which allows page-template-level customization and even switching the layout of an individual page using different page templates. I think many organizations really think they want this but rarely end up using this bit of functionality.

    Easy Edits: Once content is separated from the visual presentation of a site, it usually becomes much easier and quicker to edit and manipulate. Most WCMS software includes WYSIWYG editing tools allowing non-technical individuals to create and edit content. This is probably easier described with a screen cap of a vanilla SharePoint Foundation page in edit mode: notice the page editing toolbar and the multiple layout options. It’s actually easier to use than Microsoft Word.

    Workflow Management: Workflow is the process of creating cycles of sequential and parallel tasks that must be accomplished in the CMS. For example, a content creator can submit a story, but it is not published until the copy editor cleans it up and the editor-in-chief approves it. Workflow? It’s in there. In fact, the same workflow engine runs under SharePoint Foundation as under the other versions of SharePoint. The primary difference is that with SharePoint Foundation you need to configure the workflows yourself.

    Web Standards: Active WCMS software usually receives regular updates that include new feature sets and keep the system up to current web standards. SharePoint is in its fourth major iteration under Microsoft with the 2010 release. In addition to the innovation that Microsoft continuously adds, you have the entire global ecosystem available.

    Scalable Expansion: Available in most modern WCMSs is the ability to expand a single implementation (one installation on one server) across multiple domains. SharePoint Foundation can run multiple sites using multiple URLs on a single server install. Even more powerful, SharePoint Foundation is scalable and can be part of a multi-server farm to ensure that it will handle any amount of traffic that can be thrown at it.

    Delegation & Security: Some CMS software allows for various user groups to have limited privileges over specific content on the website, spreading out the responsibility of content management. SharePoint Foundation provides very granular security capabilities. Read more @ http://msdn.microsoft.com/en-us/library/ee537811.aspx

    Content Syndication: CMS software often assists in content distribution by generating RSS and Atom data feeds to other systems. It may also e-mail users when updates are available as part of the workflow process. SharePoint Foundation nails it: with RSS syndication and email alerts available out of the box, content syndication is already in the platform.

    Multilingual Support: Ability to display content in multiple languages. SharePoint Foundation 2010 supports more than 40 languages.

    Read more @ http://msdn.microsoft.com/en-us/library/dd776256(v=office.12).aspx
    You can download the free version from http://www.microsoft.com/en-us/download/details.aspx?id=5970

    Read the article

  • Checking who is connected to your server, with PowerShell.

    - by Fatherjack
    There are many occasions when, as a DBA, you want to see who is connected to your SQL Server, along with how they are connecting and what sort of activities they are carrying out. I’m going to look at a couple of ways of getting this information and compare the effort required and the results achieved by each. SQL Server comes with a couple of stored procedures to help with this sort of task – sp_who and its undocumented counterpart sp_who2. There is also the pumped-up version of these called sp_whoisactive, written by Adam Machanic, which does way more than these procedures. I wholly recommend you try it out if you don’t already know how it works. When it comes to serious interrogation of your SQL Server activity it is absolutely indispensable. Anyway, back to the point of this blog: we are going to look at getting the information from sp_who2 for a remote server. I wrote this PowerShell script a week or so ago and was quietly happy with it for a while. I’m relatively new to PowerShell, so forgive both my rather low threshold for entertainment and the fact that something so simple is a moderate achievement for me.

        $Server = 'SERVERNAME'
        $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
        # connection and query stuff
        $ConnectionStr = "Server=$Server;Database=Master;Integrated Security=True"
        $Query = "EXEC sp_who2"
        $Connection = new-object system.Data.SQLClient.SQLConnection
        $Table = new-object "System.Data.DataTable"
        $Connection.connectionstring = $ConnectionStr
        try{
            $Connection.open()
            $Command = $Connection.CreateCommand()
            $Command.commandtext = $Query
            $result = $Command.ExecuteReader()
            $Table.Load($result)
        }
        catch{
            # Show error
            $error[0] | format-list -Force
        }
        $Title = "Data access processes (" + $Table.Rows.Count + ")"
        $Table | Out-GridView -Title $Title
        $Connection.close()

    So this is pretty straightforward: create an SMO object that represents our chosen server, define a connection to the database and a table object for the results when we get them, execute our query over the connection, load the results into our table object and then, if everything is error free, display these results in the PowerShell grid viewer. The query simply gets the results of ‘EXEC sp_who2’ for us. How many connections there are will influence how long the query runs. The grid viewer lets me sort and search the results, so it can be a pretty handy way to locate troublesome connections. Like I say, I was quite pleased with this; it seems a pretty simple script and was working well for me. I have added a few parameters to control the output and give me more specific details, but then I saw a script that uses the $SMOServer object itself to provide the process information and saves having to define the connection object and query specifications.

        $Server = 'SERVERNAME'
        $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
        $Processes = $SMOServer.EnumProcesses()
        $Title = "SMO processes (" + $Processes.Rows.Count + ")"
        $Processes | Out-GridView -Title $Title

    Create the SMO object of our server and then call the EnumProcesses method to get all the process information from the server. Staggeringly simple! The results are a little different though. Some columns are the same and we can see the same basic information, so my first thought was: which runs faster? That way I can get my results more quickly and also place less stress on my server(s). PowerShell comes with a great way of testing this – the Measure-Command function. All you have to do is wrap your piece of code in Measure-Command {[your code here]} and it will spit out the time taken to execute the code. So, I placed both of the above methods of getting SQL Server process connections in two Measure-Command wrappers and pressed F5! (A minimal sketch of the wrapper appears at the end of this post.) The PowerShell console goes blank for a while as the code is executed internally when Measure-Command is used, but the grid viewer windows appear and the console shows the timings. You can take the output from Measure-Command and format it for easier reading, but in a simple comparison like this we can simply cross-refer the TotalMilliseconds values from the two result sets to see how the two methods performed. The query execution method (running EXEC sp_who2) is the first set of timings and the SMO EnumProcesses is the second. I have run these on a variety of servers, and while the results vary from execution to execution I have never seen the SMO version slower than the other. The difference has varied, and the time for both has ranged from sub-second to almost 5 seconds on some systems. This difference, I would suggest, is partly due to the cost overhead of having to construct the data connection and so on, whereas the SMO EnumProcesses method has the connection to the server already in place and just needs to call back the process information. There is also the difference in the data sets to consider. Let’s take a look at what we get and where the two methods differ:

        sp_who2     | SMO EnumProcesses  | Description
        -           | Urn                | What looks like an XML or JSON representation of the server name and the process ID
        SPID        | Spid               | The process ID
        Status      | Status             | The status of the process
        Login       | Login              | The login name of the user executing the command
        HostName    | Host               | The name of the computer where the process originated
        BlkBy       | BlockingSpid       | The SPID of a process that is blocking this one
        DBName      | Database           | The database that this process is connected to
        Command     | Command            | The type of command that is executing
        CPUTime     | Cpu                | The CPU activity related to this process
        DiskIO      | -                  | The disk IO activity related to this process
        LastBatch   | -                  | The time the last batch was executed from this process
        ProgramName | Program            | The application that is facilitating the process connection to the SQL Server
        SPID1       | -                  | In my experience this is always the same value as SPID
        REQUESTID   | -                  | In my experience this is always 0
        -           | Name               | In my experience this is always the same value as SPID, and so could be seen as analogous to SPID1 from sp_who2
        -           | MemUsage           | An indication of the memory used by this process, but I don't know what it is measured in (bytes, Kb, Mb...)
        -           | IsSystem           | True or False depending on whether the process is internal to the SQL Server instance or has been created by an external connection requesting data
        -           | ExecutionContextID | In my experience this is always 0, so it could be analogous to REQUESTID from sp_who2

    Please note, these are my own very brief descriptions of these columns; detail for the columns in the sp_who results can be found on MSDN at http://msdn.microsoft.com/en-GB/library/ms174313.aspx. Where the columns are common I have used that description; in other cases the information returned is purely for interpretation by the reader. Rather annoyingly, both result sets have useful information that the other doesn’t. sp_who2 returns DiskIO and LastBatch information, which is really useful, but the SMO processes method gives you IsSystem and MemUsage, which have their place in fault diagnosis methods too. So which is better? On reflection I think I prefer to use the sp_who2 method primarily, but knowing that the SMO EnumProcesses method is there when I need it is really useful and I’m sure I’ll use it regularly. I’m OK with the fact that it is the slower method, because Measure-Command has shown me how close it is to the other option and that it really isn’t a large enough margin to matter.
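
    As promised, a minimal sketch of the timing harness described above (my own reconstruction, not the article's exact code; SERVERNAME is a placeholder):

        $Server = 'SERVERNAME'
        $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server

        # Measure-Command returns a TimeSpan for the wrapped script block
        $smoTime = Measure-Command {
            $Processes = $SMOServer.EnumProcesses()
        }

        $queryTime = Measure-Command {
            # the sp_who2 version from the first script would go here
        }

        "SMO EnumProcesses : {0} ms" -f $smoTime.TotalMilliseconds
        "EXEC sp_who2      : {0} ms" -f $queryTime.TotalMilliseconds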

    Read the article

  • Suggestions for getting an open source electron beam tracking code going

    - by Boaz
    I work in the field of accelerator physics and synchrotron radiation. High energy electrons circulating in large rings of magnets produce x-rays that are used for a variety of different kinds of science. Running and improving these facilities requires controlling and modelling the electron beam as it circulates in the ring. A code to model this basically requires trackers to follow the electrons through the elements (something called a symplectic integrator), and then the computation of different parameters associated with this motion. The problem with these codes is that every facility has their own. In principle the code is not so complex, and as a modelling project one might think it has some general interest. Who doesn't want to be able to create a track in space out of magnets and watch the electrons circulate? There is a Matlab-based code to do this called Accelerator Toolbox, but the creator of the code is no longer in the field. I put the code on Sourceforge under the name atcollab. The basic resource is in C: it is the set of symplectic integrators. These are available in the atcollab code here. It has been useful to put the code on Sourceforge in order to exchange code, but the community of users is quite small and most are too busy to put that much time into collaboration. So in terms of really improving the code, I don't think it has been so successful. Any piece of this picture could be recreated without that much difficulty, but overall it is a bit complex, and because each lab has its own installation with lots of add-on Matlab code, people find it hard to really work together and share code. Somehow I think we need to involve a wider community in our development, or just use some standard tools. But for that, I suppose it needs to be of some general interest. I think symplectic integrators may have some general interest, and the part about a plug-in architecture to build up the ring ought to fit other patterns. Or the other option is to just accept that this is not a problem of general interest, and work harder within our small community. Suggestions, or anecdotes of analogous experience, would be appreciated.
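
    For readers outside the field, a toy sketch of the core idea may help; this is my own illustration in C (the language the post says the basic integrator resource is written in), not code from atcollab:

        #include <stdio.h>

        /* One particle: transverse position and slope (x, x') */
        typedef struct { double x, xp; } Particle;

        /* Drift of length L: position changes, slope does not */
        static void drift(Particle *p, double L) {
            p->x += L * p->xp;
        }

        /* Thin-lens quadrupole kick of integrated strength kL: slope changes */
        static void quad_kick(Particle *p, double kL) {
            p->xp += -kL * p->x;
        }

        int main(void) {
            Particle p = { .x = 1e-3, .xp = 0.0 };   /* 1 mm initial offset */
            /* A crude FODO-like cell tracked for 100 turns; compositions of
               drifts and kicks preserve phase-space area (symplectic) */
            for (int turn = 0; turn < 100; turn++) {
                drift(&p, 0.5); quad_kick(&p, +0.3);   /* focusing */
                drift(&p, 1.0); quad_kick(&p, -0.3);   /* defocusing */
                drift(&p, 0.5);
            }
            printf("x = %g m, x' = %g rad\n", p.x, p.xp);
            return 0;
        }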

    Read the article

  • Updates about Multidimensional vs Tabular #ssas #msbi

    - by Marco Russo (SQLBI)
    I recently read the blog post from James Serra, Tabular model: Not ready for prime time? (read also the comments, because there are discussions about a few points raised by James), and the following post from Christian Wade, Multidimensional or Tabular. In the last 2 years I have worked with many companies adopting Tabular in different scenarios, and I agree with some of the points expressed by James in his post (especially about missing features in Tabular compared to Multidimensional), but I strongly disagree with others. In general, Tabular is a good choice for a new project when: the development team does not have a good knowledge of Multidimensional and MDX (DAX is faster to learn; not as easy as Microsoft sells it, but definitely easier than MDX); you don't need calculations based on hierarchies (common in certain financial applications, but not as common as it might seem); there are important calculations based on distinct count measures; or there are complex calculations based on many-to-many relationships. Until now, I have never suggested migrating an existing Multidimensional model to a Tabular one. There would have to be very important reasons for that, such as performance issues in distinct count and many-to-many relationships that cannot be easily solved by optimizing the Multidimensional model, but I have still never encountered this scenario. I would say that in 80% of new projects you might use either Multidimensional or Tabular, and the real difference is the time-to-market, depending on the skills of the development team. So it's not strange that those who are used to Multidimensional are not moving to Tabular, since they get no particular benefit from the new model unless specific requirements exist. The recent DAXMD feature that allows using SharePoint Power View on Multidimensional is a really important one, even if I'd like to have Excel Power View enabled for this scenario too (this should be just a question of time). Another scenario in which I'm seeing growing adoption of Tabular is in companies that create models for their product/service and do so by using XMLA or Tabular AMO 2012. I am used to calling them ISVs, even if those providing services cannot really be defined that way. These companies are facing the multitenancy challenge with Tabular, and even if this is a niche market, I see some potential here, because adopting Tabular seems a much more natural choice than Multidimensional in those scenarios where an analytical engine has to be embedded to deliver one of the features of a larger product/service delivered to customers. I'd like to see other feedback in the comments: tell your story of choosing between Tabular and Multidimensional in a BI project you started with SQL Server 2012. Thanks!

    Read the article

  • Exception Handling And Other Contentious Political Topics

    - by Justin Jones
    So about three years ago, around the time of my last blog post, I promised a friend I would write this post. Keeping promises is a good thing, and this is my first step towards easing back into regular blogging. I fully expect him to return from Pennsylvania to buy me a beer over this. However, it’s been an… ahem… eventful three years or so, and blogging, unfortunately, got pushed to the back burner on my priority list, along with a few other career-minded activities. Now that the personal drama of the past three years is more or less resolved, it’s time to put a few things back on the front burner.

    What I consider to be proper exception handling practice is relatively well known these days. There are plenty of blog posts out there already which more or less echo my opinions on this topic; I’ll include a few links at the bottom of the post. Several years ago I had an argument with a co-worker who posited that exceptions should be caught at every level and logged. This might seem like sanity on the surface, but the resulting error log looked something like this:

        Error: System.SomeException Followed by small stack trace.
        Error: System.SomeException Followed by slightly bigger stack trace.
        Error: System.SomeException Followed by slightly bigger stack trace.
        Error: System.SomeException Followed by slightly bigger stack trace.
        Error: System.SomeException Followed by slightly bigger stack trace.
        Error: System.SomeException Followed by slightly bigger stack trace.
        Error: System.SomeException Followed by slightly bigger stack trace.
        Error: System.SomeException Followed by slightly bigger stack trace.

    These were all the same exception. The problem with this approach is that the error log, if you run any kind of analytics on it, becomes skewed depending on how far up the stack trace your exception was thrown. To mitigate this problem, we came up with the concept of the “PreLoggedException”. Basically, we would log the exception at the very top level and subsequently throw the exception back up the stack encapsulated in this pre-logged type, which our logging system knew to ignore. Now the error log looked like this:

        Error: System.SomeException Followed by small stack trace.

    Much cleaner, right? Well, there’s still a problem. When your exception happens in production and you go about trying to figure out what happened, you’ve lost more or less all context for where and how this exception was thrown, because all you really know is what method it was thrown in, but really nothing about who was calling the method or why. What gives you this clue is the entire stack trace, which we’re losing here. I believe that was further mitigated by having the logging system pull a system stack trace and add it to the log entry, but what you’re actually getting is the stack for how you got to the logging code. You’re still losing context about the actual error. Not to mention you’re executing a whole slew of catch blocks, which are sloooooooowwwww……… In other words, we started with a bad idea and kept band-aiding it until it didn’t suck quite so bad. When I argued for not catching exceptions at every level but rather catching them following a certain set of rules, my co-worker warned me “do yourself a favor, never express that view in any future interviews.” I suppose this is my ultimate dismissal of that advice, but I’m not too worried. My approach to exception handling follows three basic rules. Only catch an exception if: 1. You can do something about it. 2. You can add useful information to it. 3. You’re at an application boundary. Here’s what that means:

    1. Only catch an exception if you can do something about it. We’ll start with a trivial example of a login system that uses a file. Please never actually do this in production code; it’s just a concocted example. So if our code goes to open a file and the file isn’t there, we get a FileNotFound exception. If the calling code doesn’t know what to do with this, it should bubble up. However, if we know how to create the file from scratch, we can create the file and continue on our merry way. When you run into situations like this, though, what should really run through your head is “How can I avoid handling an exception at all?” In this case, it’s a trivial matter to simply check for the existence of the file before trying to open it. If we detect that the file isn’t there, we can accomplish the same thing without having to handle it in a catch block.

    2. Only catch an exception if you can add useful information to it. Continuing with the poorly-thought-out file-based login system we contrived in part 1: if the code calls a Login(…) method and the FileNotFound exception is thrown higher up the stack, the code that calls Login must account for a FileNotFound exception. This is kind of counterintuitive, because the calling code should not need to know the internals of the Login method, and the data file is an implementation detail. What makes more sense, assuming that we didn’t implement any of the good advice from step 1, is for Login to catch the FileNotFound exception and wrap it in a new exception. For argument’s sake we’ll say LoginSystemFailureException. (Sorry, couldn’t think of anything better at the moment.) This gives us two stack traces, preserving the original stack trace in the inner exception, and is also much more informative to the calling code.

    3. Only catch an exception if you’re at an application boundary. At some point we have to catch all the exceptions, even the ones we don’t know what to do with. WinForms, ASP.Net, and most other UI technologies have some kind of built-in mechanism for catching unhandled exceptions without fatally terminating the application. It’s still a good idea to somehow gracefully exit the application in this case if possible, though, because you can no longer be sure what state your application is in, but nothing annoys a user more than an application just exploding. These unhandled exceptions need to be logged, and this is a good place to catch them. Ideally you never want this option to be exercised, but code as though it will be. When you log these exceptions, give them a “Fatal” status (e.g. Log4Net) and make sure these bugs get handled in your next release.

    That’s it in a nutshell. If you do it right, each exception will only get logged once and with the largest stack trace possible, which will make those 2am emergency severity-1 debugging sessions much shorter and less frustrating. Here are a few people who also have interesting things to say on this topic:
    http://blogs.msdn.com/b/ericlippert/archive/2008/09/10/vexing-exceptions.aspx
    http://www.codeproject.com/Articles/9538/Exception-Handling-Best-Practices-in-NET
    I know there’s more but I can’t find them at the moment.
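
    To make the three rules concrete, here is a minimal sketch of the contrived file-based login example (C#, matching the post's .NET setting; beyond the names the post itself uses, everything here is illustrative):

        using System;
        using System.IO;

        // LoginSystemFailureException comes from the post; the file path and
        // recreation logic below are hypothetical, purely for illustration.
        public class LoginSystemFailureException : Exception
        {
            public LoginSystemFailureException(string message, Exception inner)
                : base(message, inner) { }
        }

        public class FileLoginSystem
        {
            private readonly string _dataFile = "users.dat"; // hypothetical store

            public bool Login(string user)
            {
                // Rule 1: better than catching FileNotFoundException -- avoid it
                // entirely by checking first and recreating the file.
                if (!File.Exists(_dataFile))
                {
                    File.WriteAllText(_dataFile, string.Empty);
                }

                try
                {
                    string[] entries = File.ReadAllLines(_dataFile);
                    return Array.IndexOf(entries, user) >= 0; // stand-in for real checks
                }
                catch (IOException ex)
                {
                    // Rule 2: wrap with context; the original stack trace is
                    // preserved in InnerException.
                    throw new LoginSystemFailureException(
                        "Login failed reading '" + _dataFile + "'", ex);
                }
                // Rule 3: anything else bubbles up to the application boundary,
                // where one top-level handler logs it once as Fatal.
            }
        }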

    Read the article

  • "mcp power or thermal limit exceeded" excessive in messages.log

    - by Stefan Thyberg
    I keep getting "mcp power or thermal limit exceeded" every five seconds while my computer is on, regardless of what it's doing. I googled it and found some Intel patches, but I don't really know exactly what they do or how to apply them, and I also don't want to patch my kernel with random bits of code from some newsgroup. Can anyone shed some light on what exactly is going on and what the right fix is in this case? Am I better off just waiting for a kernel patch?

    Read the article

  • How To Safely Eject Your USB Devices From the Desktop Context Menu

    - by Taylor Gibb
    If you are one of those people who don’t safely remove their USB devices just because you’re lazy, here’s a neat trick to do it from the context menu on your desktop. Even if you are not lazy and just forget, the icon will serve as a mental reminder. So let’s take a look.

    Read the article
