Search Results

Search found 7865 results on 315 pages for 'high density'.

Page 5/315 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • High Jinks, Hi Jacks, Exceptional DBA Awards and PASS

    - by Rodney
    The countdown to PASS has counted down. The day after tomorrow I will board a plane, like many others, on my way for the 4th year in a row to the SQL PASS Summit. The anticipation has been excruciating, but luckily I have this little thing called a day job as a DBA that has kept me busy and not thinking too much about the event. Well, that is not exactly true, since my beautiful wife works for PASS, so we get to talk about SQL from the time we wake up until late in the evening. I would not have it any other way, and I feel very fortunate to be a part of this great event and to have been chosen as the Exceptional DBA Award judge, also for the 4th year in a row. This year I have again been tasked with presenting the award to the winner, Mr. Jeff Moden, and it will be a true honor to meet him in person, as I have read many of his articles on SSC and have attended his session at PASS previously. The speech is all ready, but one item remains, which will be a surprise to all who attend the party on Tuesday night in Seattle (see links below). Let's face it: Exceptional DBAs everywhere work very hard protecting our data stores, tuning queries, mentoring, saving money, installing clusters, etc., and once in a while there is time to be exceptionally non-professional and have a bit of fun. One incident that happened this year, and that falls under the High Jinks category, was when my network admin asked if I could Telnet into a SQL instance and see if I could make the connection through the firewall that he had just configured. I was able to establish a connection on port 1433, and it occurred to me that it would be very interesting if I could actually run T-SQL queries via a Telnet session, much like you might do with an SMTP server. With that thought, I proceeded to demonstrate this could be possible by convincing my senior DBA, Shawn McGehee, that I was able to do so. At first he did not believe me. It shook his world view. It was inconceivable. What I had done, behind the scenes of course, was to copy and rename SQLCMD.exe to Telnet.exe and use it to connect and run a simple "SELECT * FROM sys.databases" on the SQL instance. I think if it had been anyone other than Shawn I could have extended this ruse indefinitely, but he caught on within 30 seconds. It was a fun thirty seconds though. On the High Jacks side of the house, which really merges into SQL HACKS: after several years of struggling with how to connect to an untrusted domain (like one in a DMZ) with a Windows account in SSMS, I finally stumbled upon a solution that does away with the requirement to use SQL Authentication. While "Runas" is a great command for running an application with a higher-privileged account, I had not previously been able to figure out how to connect to the remote domain with SSMS and "Runas". It never connected, and caused a login failure every time for the remote Windows domain account. Then I ran across an option for "Runas": "/netonly". This option postpones the login until a connection is made, and only then passes the remote login you supply when you first launch SSMS with the "Runas" command.
    So a typical shortcut would look like: C:\Windows\System32\runas.exe /netonly /user:remotedomain.com\rodlandrum "C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe" You will want to make sure the passwords are synced between the two domains, your local domain and the remote domain, otherwise you may have account lockout issues, but I have found in weeks of testing that this is a stable solution. Now it is time to get ready to head for Seattle. Please, if you see me (@SQLBeat) or my wife (@Karlakay22), run up and high five me (wait... High Jinks... High Jacks... High Fives... need to change the title) or give me a big bear hug if you are strong enough to lift me off the ground. And if you actually do that, I will think you are awesome and will not embarrass you by crying out for help or complaining of a broken back or sciatic nerve damage. And now the links to others who have all of the details. First, for the MVP Deep Dives 2, in which, like John, I was lucky enough to participate this year. http://www.simple-talk.com/community/blogs/johnm/archive/2011/09/29/103577.aspx And the details of the SSC party where the Exceptional DBA of 2011, Jeff Moden, will be awarded. http://www.simple-talk.com/community/blogs/rebecca_amos/archive/2011/10/05/103661.aspx Cheers! Rodney

    Read the article

  • High Speed Photographs Capture Pellet Gun Destruction

    - by Jason Fitzpatrick
    What do you get when you combine high speed flash photography, a carefully focused camera, and a pellet gun? Gloriously detailed pictures of pellets tearing apart fruit, cans, ceramic gnomes, and more. Alan Sailer has a passion; in his garage studio he photographs all manner of objects–bottles, raspberries, candy, soda cans–at the moment a pellet shot from a pellet gun tears them apart. The results are beautiful and reminiscent of the early high-speed photos by photography pioneer Harold Edgerton. Hit up the link below to check out the collection and read more about his process. Alan Sailer’s High Speed Photographs [via FlavorWire]

    Read the article

  • AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution by Using AlwaysOn Availability Groups

    SQL Server 2012 AlwaysOn Availability Groups provides a unified high availability and disaster recovery (HADR) solution that improves upon legacy functionality previously found across disparate features. Prior to SQL Server 2012, several customers used database mirroring to provide local high availability within a data center, and log shipping for disaster recovery to a remote data center. With SQL Server 2012, this common design pattern can be replaced with an architecture that uses availability groups for both high availability and disaster recovery. This paper details the key topology requirements of this specific design pattern, including quorum configuration considerations, the steps required to build the environment, and a workflow that shows how to handle a disaster recovery event in the new topology.

    Read the article

  • High Availability documents for OBIA

    - by Lia Nowodworska - Oracle
    There are 2 white papers that have been created by the Product Management and Advanced Resolution teams (thanks Rajesh, Archana). These documents describe how to deploy a high-availability environment for the WebLogic components of BI Applications 11.1.1.8.1 and 11.1.1.7.1, including the Oracle Data Integrator. New: Configuring High Availability for Oracle Business Intelligence Applications Version 11.1.1.8.1 (Doc ID 1679319.1). Updated: Configuring High Availability for Oracle Business Intelligence Applications Version 11.1.1.7.1 (Doc ID 1587873.1). When implementing OBIA, please take some time to review one or both of the papers. Still have a quick question that you want an expert to look at? Check the OBIA Community.

    Read the article

  • What is the meaning of 'high cohesion'?

    - by Max
    I am a student who recently joined a software development company as an intern. Back at the university, one of my professors used to say that we have to strive to achieve "low coupling and high cohesion". I understand the meaning of low coupling: it means to keep the code of separate components separate, so that a change in one place does not break the code in another. But what is meant by high cohesion? If it means integrating the various pieces of the same component well with each other, I don't understand how that becomes advantageous. What is meant by high cohesion? Could someone explain an example, so I can understand its benefits?
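    A minimal sketch of the distinction in Python may help (the class names here are invented for illustration, not taken from the question). A class whose methods all work on the same data toward one purpose is cohesive; one that mixes unrelated duties is not:

        # Low cohesion: one class mixes unrelated responsibilities,
        # so a change to any of them forces edits to the same class.
        class OrderManager:
            def calculate_total(self, items):
                return sum(price for _, price in items)

            def send_email(self, address, body):   # emailing is not an order concern
                ...

            def write_log(self, message):          # neither is logging
                ...

        # High cohesion: every method works on the same data toward
        # the same purpose, so pricing changes touch only this class.
        class Order:
            def __init__(self, items):
                self.items = items

            def add_item(self, name, price):
                self.items.append((name, price))

            def total(self):
                return sum(price for _, price in self.items)

    The advantage is the same as with low coupling, seen from the inside: when a class has one reason to change, changes stay local and the class stays easy to test and reuse.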

    Read the article

  • The Role of High Availability Computing on Business Continuity -- Part 1 of 2

    For organizations that can't afford, sustain or justify downtime -- developing, implementing and testing a high-availability computing strategy is essential. Unplanned downtime affects company reputation, stock price and competitive strategy. It can even delay IT innovation projects necessary for delivering new services to customers. Learn how Oracle's approach to high availability computing is fundamentally different from the traditional model. Hear Oracle Thought Leader Balaji Bashyam (Vice President, Global Database Support) discuss high availability strategy, best practices, and the effects of availability on business, in a question and answer interview format. This podcast is presented in two parts and is intended for an audience of decision makers and influencers. Part 1 of 2

    Read the article

  • The Role of High Availability Computing on Business Continuity -- Part 2 of 2

    For organizations that can't afford, sustain or justify downtime -- developing, implementing and testing a high-availability computing strategy is essential. Unplanned downtime affects company reputation, stock price and competitive strategy. It can even delay IT innovation projects necessary for delivering new services to customers. Learn how Oracle's approach to high availability computing is fundamentally different from the traditional model. Hear Oracle Thought Leader Balaji Bashyam (Vice President, Global Database Support) discuss high availability strategy, best practices, and the effects of availability on business, in a question and answer interview format. This podcast is presented in two parts and is intended for an audience of decision makers and influencers.

    Read the article

  • Can a Double-Density Floppy drive be swapped out for a High-Density drive?

    - by Ben
    I'm trying to fix the floppy drive in my Roland MC-500. The floppy drive inside is a Matsushita DDF3-1 which doesn't seem to be in production anymore. The DDF3-2 is still available, however it is an HD drive. Does anyone know off the top of their heads if there is any issue with simply swapping the two drives? Could the power usage be different on the drives? Since the machine is expecting a 720k DD drive, are there any jumper settings to have the HD drive function as a DD drive?

    Read the article

  • Android multiple screen sizes with same density

    - by droidy
    I'm confused regarding the densities. I see that with medium density, the screen resolution could be either 320x480, 480x800, or 480x854. So if I have an image that's 300px wide in the mdpi folder, how is it going to look the same size on all 3 different screen sizes (mainly 320x480 vs the other 2)? And by "look the same size", I mean scale to be bigger or smaller depending upon the screen size. Thanks.
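    A rough sketch of the arithmetic behind the confusion (the figures are illustrative, not from the question): Android converts density-independent pixels to physical pixels as px = dp * (dpi / 160), so drawables are rescaled across density buckets, not across different resolutions within the same bucket. On two mdpi screens the same 300px bitmap covers the same number of physical pixels but a different fraction of the screen width:

        def dp_to_px(dp: float, dpi: float) -> float:
            """Android's rule: 1 dp equals 1 px at the 160 dpi baseline."""
            return dp * (dpi / 160.0)

        # Two hypothetical mdpi (~160 dpi) screens with different widths:
        for screen_width_px in (320, 480):
            image_px = dp_to_px(300, 160)           # 300 px bitmap drawn 1:1 on mdpi
            fraction = image_px / screen_width_px   # share of the screen it covers
            print(f"{screen_width_px}px wide: image fills {fraction:.0%} of the width")

    This prints 94% versus 62%: same pixel size, different apparent size, which is why layouts are expected to stretch or scroll rather than rely on the bitmap scaling itself.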

    Read the article

  • AS3: limit objects to stage width?

    - by Gabriel Meono
    I want to limit the creation of objects according to the stage width. My method is the following:

        for (var i:int = 0; i < 7; i++){

    If I put something like this, it won't work:

        for (var i:int = 0; i < stage.width; i++){

    What am I doing wrong? Full code:

        [SWF(width = 350, height = 600, frameRate = 60)]
        import com.actionsnippet.qbox.*;

        var sim:QuickBox2D = new QuickBox2D(this);
        sim.createStageWalls();

        // make a heavy circle
        sim.addCircle({x:3, y:3, radius:0.4, density:1});

        // create a few platforms
        // make 26 dominoes
        for (var i:int = 0; i < 7; i++){
            //End
            sim.addCircle({x:1 + i * 1.5, y:18, radius:0.1, density:0});
            sim.addCircle({x:2 + i * 1.5, y:17, radius:0.1, density:0});
            sim.addCircle({x:1 + i * 1.5, y:16, radius:0.1, density:0});
            sim.addCircle({x:2 + i * 1.5, y:15, radius:0.1, density:0});
            //Mid end
            sim.addCircle({x:0 + i * 2, y:14, radius:0.1, density:0});
            sim.addCircle({x:0 + i * 2, y:13, radius:0.1, density:0});
            sim.addCircle({x:0 + i * 2, y:12, radius:0.1, density:0});
            sim.addCircle({x:0 + i * 2, y:11, radius:0.1, density:0});
            sim.addCircle({x:0 + i * 2, y:10, radius:0.1, density:0});
        }

        sim.start();
        sim.mouseDrag();

    Read the article

  • AS3: StageWidth for BOX2D?

    - by Gabriel Meono
    I know BOX2D uses meters, and AS3 uses pixels. I'm trying to create objects which are limited to the stageWidth. If I use this loop:

        for (var i:int = 0; i < stage.stageWidth; i++){...}

    the animation freezes, and this output appears:

        TypeError: Error #1009: Cannot access a property or method of a null object reference.
            at Box2D.Collision::b2BroadPhase/CreateProxy()
            at Box2D.Collision.Shapes::b2Shape/CreateProxy()
            at Box2D.Dynamics::b2Body/CreateShape()
            at com.actionsnippet.qbox.objects::CircleObject/build()
            at com.actionsnippet.qbox::QuickObject/init()
            at com.actionsnippet.qbox::QuickObject()
            at com.actionsnippet.qbox.objects::CircleObject()
            at com.actionsnippet.qbox::QuickBox2D/create()
            at com.actionsnippet.qbox::QuickBox2D/addCircle()
            at BOX2D_Test_Tutorial_fla::MainTimeline/frame1()

    Does anyone know how to fix this? Full code:

        [SWF(width = 350, height = 600, frameRate = 60)]
        import com.actionsnippet.qbox.*;

        var sim:QuickBox2D = new QuickBox2D(this);
        sim.createStageWalls();

        // make a heavy circle
        sim.addCircle({x:3, y:3, radius:0.4, density:1});

        // create a few platforms
        // make pins
        for (var i:int = 0; i < stage.stageWidth; i++){
            //End
            sim.addCircle({x:1 + i * 1.5, y:18, radius:0.1, density:0});
            sim.addCircle({x:2 + i * 1.5, y:17, radius:0.1, density:0});
            sim.addCircle({x:1 + i * 1.5, y:16, radius:0.1, density:0});
            sim.addCircle({x:2 + i * 1.5, y:15, radius:0.1, density:0});
            //Mid end
            sim.addCircle({x:0 + i * 2, y:14, radius:0.1, density:0});
            sim.addCircle({x:0 + i * 2, y:13, radius:0.1, density:0});
            sim.addCircle({x:0 + i * 2, y:12, radius:0.1, density:0});
            sim.addCircle({x:0 + i * 2, y:11, radius:0.1, density:0});
            sim.addCircle({x:0 + i * 2, y:10, radius:0.1, density:0});
        }

        sim.start();
        sim.mouseDrag();

    Read the article

  • It is Wise to Buy High PR Backlinks

    With competition extending into the virtual world, it is necessary to make sure that websites are linked to other websites of high PageRank. This is vital because a site can enjoy the airs of being on top by mere association with another credible website. The internet domain is now extended and offers scope for websites of high quality to stay on top.

    Read the article

  • 7 Things that High Availability is Not

    Wesley has heard High Availability touted as a technological cure-all for busy SysAdmins and DBAs, and now he's taking a stand against it. There are a range of things that High Availability is regularly confused with (either deliberately or innocently), and Wesley's clearing it all up.

    Read the article

  • C++ Programming: Better Accessibility with High DPI Support and MFC 10

    A number of factors are driving the requirement for applications to correctly support high DPI settings--increased monitor resolutions are making it more difficult for users to read text on the screen, compliance with disability access legislation is an increasingly important factor for corporations, and users now expect applications to behave well at higher DPI settings. MFC 10 and Visual C++ 2010 have built-in support for high DPI, making the development of a DPI-aware application quicker and simpler.

    Read the article

  • Research topics for starting and optimizing a high-traffic website

    - by user745434
    I bury a good deal of my ideas for fear that I don't know enough about scaling web applications and high-traffic websites. That said, I'd like to know of any general topics to research in order to ensure that your web app doesn't break or slow down when you start getting to Twitter-level traffic. I'm looking for research topics with, preferably, additional resources. For example: make sure you optimize SQL queries (see High Performance MySQL Optimization).

    Read the article

  • 5 Easy Ways to Get High PR Links to Boost Your Site in Google

    High PR links are some of the most valuable assets in any link-building & SEO campaign. Not only do these links make Google respect your site more, but they can also boost your site's ranking overnight. Here are 5 places to get high quality links that will do a lot of good for your site's ranking in Google.

    Read the article

  • Best solution for High Availability and SSRS on SQL Server 2008 R2?

    - by Chandra
    I have 2 physical servers with SQL Server 2008 R2 – SQL Server 1 (Active) & SQL Server 2 (Passive). The web application is developed using the .NET 4.0 Framework. I want to know the best solution to have high availability and also have SSRS for reporting. Planned solution: mirroring for failover, and transactional replication for SSRS, since the mirrored database can only be used in failover scenarios. SSRS will be on the passive server, to reduce the load on the active server. Let me know if the solution is correct. Also suggest alternate approaches.

    Read the article

  • Rendering large and high poly meshes

    - by Aurus
    Consider a huge terrain that has a lot of polygons. To render this terrain I thought of the following techniques: Using a height-map instead of raw meshes: Yes, but I want to create a lot of caves and stuff that simply won't work with height-maps. Using voxels: Yes, but I think that this would be too much since I don't even want to support changing terrain. Splitting into multiple chunks and doing some sort of LOD with the mesh: Yes, but how would I do that? Tessellation usually creates more detail, not less. Precomputing the same mesh in lower-poly versions (like Mudbox does) and, depending on the distance, rendering one of these meshes: Graphics memory is limited, and uploading only the chunks won't solve that problem since the traffic would be too high. IMO the last one sounds really good, but imagine the following process: Upload and render the chunks depending on the current player position. [No problem] The player walks straight forward. Now we may have to swap one of the low-poly chunks for the high-poly one. So: remove the low-poly chunk and load the high-poly chunk. [Already too much traffic here, I think] I am not very experienced in graphics programming, and maybe the above process is totally okay, but somehow I think it is too much. And how about the disk space it would require? I think 3 levels of detail would be fine, but isn't that also too much? (I am using OpenGL, but I don't think that this is important.)

    Read the article

  • Should mock objects for tests be created at a high or low level

    - by Danack
    When creating unit tests, what is the best way to create mock objects that provide data to other objects? Should they be created at a 'high level' and intercept the calls as soon as possible, or should they be created at a 'low level', so that as much of the real code as possible is still called? E.g., I'm writing a test for some code that requires a NoteMapper object that allows Notes to be loaded from the DB.

        class NoteMapper {
            function getNote($sqlQueryFactory, $noteID) {
                // Create an SQL query from $sqlQueryFactory
                // Run that SQL
                // if null
                //     return null
                // else
                //     return new Note($dataFromSQLQuery)
            }
        }

    I could mock this object at a high level by creating a mock NoteMapper object, so that there are no calls to the SQL at all, e.g.:

        class MockNoteMapper {
            function getNote($sqlQueryFactory, $noteID) {
                // $mockData = {'Test Note title', "Test note text"}
                // return new Note($mockData);
            }
        }

    Or I could do it at a very low level, by creating a MockSQLQueryFactory that, instead of actually querying the database, just provides mock data back, and passing that to the current NoteMapper object. It seems that creating mocks at a high level would be easier in the short term, but that in the long term doing it at a low level would be more powerful and possibly allow more automation of tests, e.g. by recording data in and out of a DB and then replaying that data for tests. Is there a recommended way of creating mocks? Are there any hard and fast rules about which are better, or should they both be used where appropriate?
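    A sketch of the two approaches using Python's unittest.mock (the names mirror the pseudocode above; the query-factory API is invented for illustration):

        from unittest.mock import Mock

        # High level: replace the whole mapper, so no SQL path runs at all.
        note_mapper = Mock()
        note_mapper.getNote.return_value = {"title": "Test Note title",
                                            "text": "Test note text"}
        assert note_mapper.getNote(None, 42)["title"] == "Test Note title"

        # Low level: keep the real mapper, fake only the query layer beneath it.
        class NoteMapper:
            def getNote(self, query_factory, note_id):
                row = query_factory.run("SELECT * FROM notes WHERE id = ?", note_id)
                return None if row is None else {"title": row[0], "text": row[1]}

        query_factory = Mock()
        query_factory.run.return_value = ("Test Note title", "Test note text")
        note = NoteMapper().getNote(query_factory, 42)
        assert note == {"title": "Test Note title", "text": "Test note text"}

    The trade-off in the question shows up directly: the high-level mock is one line, but the low-level mock exercises the real getNote logic, including its null check.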

    Read the article

  • Delphi Phrase Count / Keyword Density

    - by Brad
    Does anyone know how to, or have some code for, counting the number of unique phrases in a document? (Single-word, two-word, and three-word phrases.) Thanks. Example of what I'm looking for: I have a text document, and I need to see what the most popular word phrases are. Example text: "I took the car to the car wash."

        I : 1
        took : 1
        the : 2
        car : 2
        to : 1
        wash : 1
        I took : 1
        took the : 1
        the car : 2
        car to : 1
        to the : 1
        car wash : 1
        I took the : 1
        took the car : 1
        the car to : 1
        car to the : 1
        to the car : 1
        the car wash : 1
        I took the car to : 1
        took the car to the : 1
        the car to the car : 1
        car to the car wash : 1

    I need the phrase, and the count of how many times it shows up. Any help would be appreciated. The closest thing I found to this was a PHP script from http://tools.seobook.com/general/keyword-density/source.php I used to have some code for this, but I cannot find it.
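    The algorithm itself is short; here is a sketch in Python (not Delphi, but it ports directly: split the text into words, then count every sliding window of one to max_words words in a dictionary -- in Delphi a TDictionary<string, Integer> plays the role of the Counter):

        from collections import Counter
        import re

        def phrase_counts(text: str, max_words: int = 3) -> Counter:
            """Count every 1..max_words-word phrase in the text (case-folded)."""
            words = re.findall(r"[a-z']+", text.lower())
            counts = Counter()
            for n in range(1, max_words + 1):
                for i in range(len(words) - n + 1):
                    counts[" ".join(words[i:i + n])] += 1
            return counts

        for phrase, count in phrase_counts("I took the car to the car wash.").most_common():
            print(phrase, ":", count)

    Run on the example sentence, this reproduces the counts above (lowercased): "the : 2", "car : 2", "the car : 2", and so on.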

    Read the article

  • Looking for a recommendation on measuring a high availability app that is using a CDN.

    - by T Reddy
    I work for a Fortune 500 company that struggles with accurately measuring performance and availability for high availability applications (i.e., apps that are up 99.5% with 5-second page-to-page navigation). We factor in both scheduled and unscheduled downtime to determine this availability number. However, we recently added a CDN into the mix, which complicates our metrics a bit. The CDN now handles about 75% of our traffic, while sending the remainder to our own servers. We attempt to measure what we call a "true user experience" (i.e., our testing scripts emulate a typical user clicking through the application). These monitoring scripts sit outside of our network, which means we're hitting the CDN about 75% of the time. Management has decided that we take the worst-case scenario to measure availability. So if our origin servers are having problems, but the CDN is serving content just fine, we still take a hit on availability. The same is true the other way around. My thought is that as long as the "user experience" is successful, we should not unnecessarily punish ourselves. After all, a CDN is there to improve performance and availability! I'm just wondering if anyone has any knowledge of how other Fortune 500 companies calculate their availability numbers. I look at apple.com, for instance, as a storefront that uses a CDN and never seems to be down (unless there is about to be a major product announcement). It would be great to have some hard, factual data, because I don't believe that we need to unnecessarily hurt ourselves on these metrics. We are making business decisions based on these numbers. I can say, however, given that these metrics are visible to management, issues get addressed and resolved pretty fast (read: we cut through the red tape pretty quickly). Unfortunately, as a developer, I don't want management to think that the application is up or down because some external factor (i.e., the CDN) is influencing the numbers. Thoughts? (I mistakenly posted this question on StackOverflow, sorry in advance for the cross-post.)
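    The gap between the two policies is easy to sketch (the availability figures below are hypothetical; only the 75/25 traffic split comes from the question): a traffic-weighted "user experience" number blends the tiers by how often users hit each one, while the worst-case policy takes the minimum.

        cdn_share, origin_share = 0.75, 0.25        # traffic split described above
        cdn_avail, origin_avail = 0.9999, 0.9900    # hypothetical measured values

        weighted = cdn_share * cdn_avail + origin_share * origin_avail
        worst_case = min(cdn_avail, origin_avail)

        print(f"traffic-weighted availability: {weighted:.4%}")    # ~99.7425%
        print(f"worst-case availability:       {worst_case:.4%}")  # 99.0000%

    Under the weighted model an origin outage only degrades the metric in proportion to the traffic it actually serves, which is the "true user experience" argument in a single line of arithmetic.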

    Read the article

  • iPhone OS: Strategies for high density image work

    - by Jasconius
    I have a project that is coming around the bend this summer that is going to involve, potentially, an extremely high volume of image data for display. We are talking hundreds of 640x480-ish images in a given application session (scaled to a smaller resolution when displayed), and handfuls of very large (1280x1024 or higher) images at a time. I've already done some preliminary work, and I've found that the typical 640x480-ish image is just a shade under 1 MB in memory when placed into a UIImageView and displayed... but the very large images can be a whopping 5+ MB in some cases. This project is actually targeted at the iPad, which, in my Instruments tests, seems to cap out at about 80-100 MB of addressable physical memory. Details aside, I need to start thinking of how to move huge volumes of image data between virtual and physical memory while preserving the fluidity and responsiveness of the application, which will be high visibility. I'm probably on the higher end of intermediate at Objective-C... so I am looking for some solid articles and advice on the following: 1) Responsible management of UIImage and UIImageView in the name of conserving physical RAM 2) Merits of using CGImage over UIImage, particularly for the huge images, and whether there will be any performance gain 3) Anything dealing with memory paging, particularly as it pertains to images I will epilogue by saying that the numbers I have above may be off by about 10 or 15%. Images may or may not end up being bundled into the actual app itself as opposed to being loaded in from an external server.
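    Those memory figures are easy to sanity-check: an uncompressed bitmap costs width x height x bytes-per-pixel, and decoded images are typically 4-byte RGBA. A quick sketch of the estimate (an approximation; it ignores any per-object UIImage overhead):

        def decoded_size_mb(width: int, height: int, bytes_per_pixel: int = 4) -> float:
            """Approximate in-memory size of a decoded bitmap."""
            return width * height * bytes_per_pixel / (1024 * 1024)

        for w, h in ((640, 480), (1280, 1024)):
            print(f"{w}x{h}: ~{decoded_size_mb(w, h):.1f} MB decoded")
        # 640x480:   ~1.2 MB -- close to the ~1 MB observed per image
        # 1280x1024: ~5.0 MB -- matches the 5+ MB seen for the large images

    At those rates, the 80-100 MB ceiling allows only 15-20 of the large images decoded at once, which is why the paging strategy matters more than any per-image optimization.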

    Read the article

  • SQL SERVER – SQL Server High Availability Options – Notes from the Field #032

    - by Pinal Dave
    [Notes from Pinal]: When it is about High Availability or Disaster Recovery, I often see people getting confused. There are so many options available that when users have to select the most optimal solution for their organization, they are often confused. Most people even know the salient features of the various options, but when they have to pick one single option to use they are often not sure which one. I like to ask my dear friend Tim all these kinds of complicated questions. He has a skill for making a complex subject very simple and easy to understand. Linchpin People are database coaches and wellness experts for a data driven world. In this episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains in very simple words the best High Availability option for your SQL Server. Working with SQL Server, a common challenge we are faced with is providing the maximum uptime possible. To meet these demands we have to design a solution to provide High Availability (HA). Microsoft SQL Server, depending on your edition, provides you with several options. This could be database mirroring, log shipping, failover clusters, availability groups or replication. Each possible solution comes with pros and cons. No one solution fits all scenarios, so understanding which solution meets which need is important. As with anything IT related, you need to fully understand your requirements before trying to solve the problem. When it comes to building an HA solution, you need to understand the risk your organization needs to mitigate the most. I have found that most are concerned about hardware failure and OS failures. Other common concerns are data corruption or storage issues. For data corruption or storage issues you can mitigate those concerns by having a second copy of the databases. That can be accomplished with database mirroring, log shipping, replication or availability groups with a secondary replica. Failover clustering and virtualization with shared storage do not provide redundancy of the data. I recently created a chart outlining some pros and cons of each of the technologies, which I posted on my blog. I like to use this chart to help illustrate how each technology provides a certain number of benefits. Each of these solutions carries with it some level of cost and complexity. As database professionals we should all be familiar with these technologies so we can make the best possible choice for our organization. Note: Tim has also written an excellent book on SQL Backup and Recovery. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • High CPU load - Ubuntu 14.04

    - by watt
    I noticed that sometimes when browsing (with other processes in the background), I get very high CPU load for the browser process (over 100%) and the computer becomes really slow. I tried switching from Firefox (with just a few extensions) to Chromium, but the same thing happens without me visiting graphics-intense sites, Flash sites or anything like that. I also noticed Python or Node (when running "make") producing the same high CPU load from time to time, so this is not necessarily browser-related. When I only have a browser open, it doesn't seem to happen, and everything is fine in Windows 7. I switched from Unity to GNOME 3 with no effect. Specs: Lenovo W510 (4 GB RAM, i7 Q820 @ 1.73 GHz) + up-to-date Ubuntu 14.04 64-bit. Screenshot: http://imgur.com/8MZJNKC Do you guys have any idea why this might happen? Please let me know if there's other info you need. Thanks!

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >