Search Results

Search found 8715 results on 349 pages for 'bad sectors'.

Page 80/349

  • SQL SERVER – Automation Process Good or Ugly

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday, hosted by SQL Server Insane Asylum. The idea of this post really caught my attention. Automation – something getting itself done after the initial programming – is my understanding of the subject. The very next thought was: is it good or evil? The reality is there is no right answer. However, let us quickly note a few things, and then I would like to request your help to complete this post. We will start with the positive parts of SQL Server where automation happens. The Good: If I start thinking of SQL Server and automation, the very first thing that comes to my mind is SQL Agent, which runs various jobs. Once I configure any task or job, it runs fine (till something goes wrong!). Automation has its own advantages, and we have all used SQL Agent for so many things – backups, various validation jobs, maintenance jobs and numerous other things. What other kinds of automation tasks do you run on your database server? The Ugly: This part is very interesting, because it can get really ugly(!). During my career I have found many bad automation agent jobs: a client had an agent job that dropped the clean buffers every hour; a client used Database Mail to send regular emails instead of only the necessary alert-related emails; and the best one – a client put the new Missing Index and Unused Index scripts into a SQL Agent job and followed their suggestions 100%. Believe me, I have never seen such a badly performing and hard-to-optimize database. (I ended up dropping all non-clustered indexes on the development server, running the production workload against it again, and then configuring optimal indexes.) Shrinking a database is a performance killer and should never be automated: SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance. The one I hate the most is the AutoShrink database option; it has given me a hard time quite a few times in my career: SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server. Automation is necessary, but common sense is a must when creating automation. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • panic! apache2 cannot start because it cannot read error.log!

    - by vvvvvvv
    apache2 fails to start because it cannot open error.log I've checked the syslog, and found something troubling... Jan 20 02:58:18 unassigned sm-mta[3559]: o0FAD04C017861: to=<[email protected]>, delay=4+22:45:18, xdelay=00:06:18, mailer=esmtp, pri=63390823, relay=asdfa$ Jan 20 03:00:01 unassigned /USR/SBIN/CRON[3939]: (root) CMD (if [ -x /usr/bin/vnstat ] && [ `ls /var/lib/vnstat/ | wc -l` -ge 1 ]; then /usr/bin/vnstat -u; f$ Jan 20 03:00:01 unassigned /USR/SBIN/CRON[3944]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jan 20 03:00:01 unassigned /USR/SBIN/CRON[3949]: (www-data) CMD ([ -x /usr/lib/cgi-bin/awstats.pl -a -f /etc/awstats/awstats.conf -a -r /var/log/apache/acces$ Jan 20 03:02:48 unassigned kernel: [371919.642705] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 Jan 20 03:02:48 unassigned kernel: [371919.642754] ata3.00: BMDMA stat 0x24 Jan 20 03:02:48 unassigned kernel: [371919.642779] ata3.00: cmd 25/00:08:37:59:e2/00:00:42:00:00/e0 tag 0 dma 4096 in Jan 20 03:02:48 unassigned kernel: [371919.642780] res 51/01:00:37:59:e2/01:00:42:00:00/e0 Emask 0x1 (device error) Jan 20 03:02:48 unassigned kernel: [371919.642824] ata3.00: status: { DRDY ERR } Jan 20 03:02:48 unassigned kernel: [371919.657647] ata3.00: configured for UDMA/133 Jan 20 03:02:48 unassigned kernel: [371919.657661] ata3: EH complete Jan 20 03:02:50 unassigned kernel: [371921.857580] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 Jan 20 03:02:50 unassigned kernel: [371921.857620] ata3.00: BMDMA stat 0x24 Jan 20 03:02:50 unassigned kernel: [371921.857645] ata3.00: cmd 25/00:08:37:59:e2/00:00:42:00:00/e0 tag 0 dma 4096 in Jan 20 03:02:50 unassigned kernel: [371921.857646] res 51/01:00:37:59:e2/01:00:42:00:00/e0 Emask 0x1 (device error) Jan 20 03:02:50 unassigned kernel: [371921.857688] ata3.00: status: { DRDY ERR } Jan 20 03:02:50 unassigned kernel: [371921.881468] ata3.00: configured for UDMA/133 Jan 20 03:02:54 unassigned kernel: [371921.881479] ata3: EH complete Jan 20 03:02:54 unassigned kernel: [371924.081382] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 Jan 20 03:02:54 unassigned kernel: [371924.081417] ata3.00: BMDMA stat 0x24 Jan 20 03:02:54 unassigned kernel: [371924.081443] ata3.00: cmd 25/00:08:37:59:e2/00:00:42:00:00/e0 tag 0 dma 4096 in Jan 20 03:02:54 unassigned kernel: [371924.081444] res 51/01:00:37:59:e2/01:00:42:00:00/e0 Emask 0x1 (device error) Jan 20 03:02:54 unassigned kernel: [371924.081487] ata3.00: status: { DRDY ERR } Jan 20 03:02:54 unassigned kernel: [371924.105252] ata3.00: configured for UDMA/133 Jan 20 03:02:54 unassigned kernel: [371924.105261] ata3: EH complete Jan 20 03:02:54 unassigned kernel: [371933.656925] sd 2:0:0:0: [sda] 1465149168 512-byte hardware sectors (750156 MB) Jan 20 03:02:54 unassigned kernel: [371933.656941] sd 2:0:0:0: [sda] Write Protect is off Jan 20 03:02:54 unassigned kernel: [371933.656944] sd 2:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 20 03:02:54 unassigned kernel: [371933.656956] sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 20 03:02:54 unassigned kernel: [371933.656972] sd 2:0:0:0: [sda] 1465149168 512-byte hardware sectors (750156 MB) Jan 20 03:02:54 unassigned kernel: [371933.656979] sd 2:0:0:0: [sda] Write Protect is off Jan 20 03:02:54 unassigned kernel: [371933.656982] sd 2:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 20 03:02:54 unassigned kernel: [371933.656993] sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support 
DPO or FUA Jan 20 03:03:34 unassigned kernel: [371966.060069] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 Jan 20 03:05:48 unassigned kernel: [371971.776846] ata3.00: BMDMA stat 0x24 Jan 20 03:05:48 unassigned kernel: [371971.776871] ata3.00: cmd 25/00:18:87:10:ee/00:00:42:00:00/e0 tag 0 dma 12288 in Jan 20 03:05:48 unassigned kernel: [371971.776872] res 51/01:00:87:10:ee/01:00:42:00:00/e0 Emask 0x1 (device error) Jan 20 03:05:48 unassigned kernel: [371971.776914] ata3.00: status: { DRDY ERR } Jan 20 03:05:48 unassigned kernel: [371971.800668] ata3.00: configured for UDMA/133 Jan 20 03:05:48 unassigned kernel: [371971.800687] ata3: EH complete Jan 20 03:05:48 unassigned kernel: [371974.157850] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 Jan 20 03:05:48 unassigned kernel: [371974.157885] ata3.00: BMDMA stat 0x24 Jan 20 03:05:48 unassigned kernel: [371974.157911] ata3.00: cmd 25/00:18:87:10:ee/00:00:42:00:00/e0 tag 0 dma 12288 in Jan 20 03:05:48 unassigned kernel: [371974.157912] res 51/01:00:88:10:ee/01:00:42:00:00/e0 Emask 0x1 (device error) Jan 20 03:05:48 unassigned kernel: [371974.157956] ata3.00: status: { DRDY ERR } Jan 20 03:05:48 unassigned kernel: [371974.179773] ata3.00: configured for UDMA/133 Jan 20 03:05:48 unassigned kernel: [371974.179786] ata3: EH complete Jan 20 03:05:48 unassigned kernel: [371976.398570] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 Jan 20 03:05:48 unassigned kernel: [371976.398610] ata3.00: BMDMA stat 0x24 Jan 20 03:05:48 unassigned kernel: [371976.398635] ata3.00: cmd 25/00:18:87:10:ee/00:00:42:00:00/e0 tag 0 dma 12288 in Jan 20 03:05:48 unassigned kernel: [371976.398636] res 51/01:00:88:10:ee/01:00:42:00:00/e0 Emask 0x1 (device error) Jan 20 03:05:48 unassigned kernel: [371976.398678] ata3.00: status: { DRDY ERR } Jan 20 03:05:48 unassigned kernel: [371976.423477] ata3.00: configured for UDMA/133 Jan 20 03:05:48 unassigned kernel: [371976.423495] sd 2:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE,SUGGEST_OK Jan 20 03:05:48 unassigned kernel: [371976.423498] sd 2:0:0:0: [sda] Sense Key : Medium Error [current] [descriptor] Jan 20 03:05:48 unassigned kernel: [371976.423501] Descriptor sense data with sense descriptors (in hex): Jan 20 03:05:48 unassigned kernel: [371976.423503] 72 03 13 00 00 00 00 0c 00 0a 80 00 00 00 00 00 Jan 20 03:05:48 unassigned kernel: [371976.423508] 42 ee 10 88 Jan 20 03:05:48 unassigned kernel: [371976.423510] sd 2:0:0:0: [sda] Add. Sense: Address mark not found for data field Jan 20 03:05:48 unassigned kernel: [371976.423515] end_request: I/O error, dev sda, sector 1122898056 Jan 20 03:05:48 unassigned kernel: [371976.423536] ata3: EH complete Jan 20 03:05:48 unassigned kernel: [371978.630504] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 Jan 20 03:05:48 unassigned kernel: [371978.630547] ata3.00: BMDMA stat 0x24

    Read the article

  • How to remove a RAID 5 array of 5 SAS drives and use one SAS drive at a time, without writing anything to disk, for recovery

    - by murtaza hamid
    I have an HP ML370 G5 server with 8 SAS drives: drive C is 1 x 72 GB in RAID 0, drive D is 2 x 72 GB in RAID 0, and drive F is 5 x 146 GB in RAID 5. Two of the five SAS drives in the RAID 5 set have bad sectors, and the array is showing a failed status. Now I want to remove all five of these SAS drives and put them, one by one, into any of the bays to image them for data recovery, without writing anything to the drives. How should I proceed? I also want to keep drives C and D intact. Also, if I put all five drives back into the bays in the same sequence, will it recognise the RAID 5 array? (I read somewhere it's a smart controller... just curious.) Many thanks in advance.

    Read the article

  • Can't Remove Logical Drive/Array from HP P400

    - by Myles
    This is my first post here. Thank you in advance for any assistance with this matter. I'm trying to remove a logical drive (logical drive 2) and an array (array "B") from my Smart Array P400. The host is a DL580 G5 running 64-bit Red Hat Enterprise Linux Server release 5.7 (Tikanga). I am unable to remove the array using either hpacucli or cpqacuxe. I believe it is because of "OS Status: LOCKED". The file system that lives on this array has been unmounted. I do not want to reboot the host. Is there some way to "release" this logical drive so I can remove the array? Note that I do not need to preserve the data on logical drive 2. I intend to physically remove the drives from the machine and replace them with larger drives. I'm using the cciss kernel module that ships with Red Hat 5.7. Here is some information pertaining to the host and the P400 configuration: [root@gort ~]# cat /etc/redhat-release Red Hat Enterprise Linux Server release 5.7 (Tikanga) [root@gort ~]# uname -a Linux gort 2.6.18-274.el5 #1 SMP Fri Jul 8 17:36:59 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux [root@gort ~]# rpm -qa | egrep '^(hp|cpq)' cpqacuxe-9.30-15.0 hp-health-9.25-1551.7.rhel5 hpsmh-7.1.2-3 hpdiags-9.3.0-466 hponcfg-3.1.0-0 hp-snmp-agents-9.25-2384.8.rhel5 hpacucli-9.30-15.0 [root@gort ~]# hpacucli HP Array Configuration Utility CLI 9.30.15.0 Detecting Controllers...Done. Type "help" for a list of supported commands. Type "exit" to close the console. => ctrl all show config detail Smart Array P400 in Slot 0 (Embedded) Bus Interface: PCI Slot: 0 Cache Serial Number: PA82C0J9SVW34U RAID 6 (ADG) Status: Enabled Controller Status: OK Hardware Revision: D Firmware Version: 7.22 Rebuild Priority: Medium Expand Priority: Medium Surface Scan Delay: 15 secs Surface Scan Mode: Idle Wait for Cache Room: Disabled Surface Analysis Inconsistency Notification: Disabled Post Prompt Timeout: 0 secs Cache Board Present: True Cache Status: OK Cache Ratio: 25% Read / 75% Write Drive Write Cache: Disabled Total Cache Size: 256 MB Total Cache Memory Available: 208 MB No-Battery Write Cache: Disabled Cache Backup Power Source: Batteries Battery/Capacitor Count: 1 Battery/Capacitor Status: OK SATA NCQ Supported: True Logical Drive: 1 Size: 136.7 GB Fault Tolerance: RAID 1 Heads: 255 Sectors Per Track: 32 Cylinders: 35132 Strip Size: 128 KB Full Stripe Size: 128 KB Status: OK Caching: Enabled Unique Identifier: 600508B100184A395356573334550002 Disk Name: /dev/cciss/c0d0 Mount Points: /boot 101 MB, /tmp 7.8 GB, /usr 3.9 GB, /usr/local 2.0 GB, /var 3.9 GB, / 2.0 GB, /local 113.2 GB OS Status: LOCKED Logical Drive Label: A0027AA78DEE Mirror Group 0: physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK) Mirror Group 1: physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK) Drive Type: Data Array: A Interface Type: SAS Unused Space: 0 MB Status: OK Array Type: Data physicaldrive 1I:1:1 Port: 1I Box: 1 Bay: 1 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM57RF40000983878FX Model: HP DG146BB976 Current Temperature (C): 29 Maximum Temperature (C): 35 PHY Count: 2 PHY Transfer Rate: Unknown, Unknown physicaldrive 1I:1:2 Port: 1I Box: 1 Bay: 2 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM55VQC000098388524 Model: HP DG146BB976 Current Temperature (C): 29 Maximum Temperature (C): 36 PHY Count: 2 PHY Transfer Rate: Unknown, Unknown Logical Drive: 2 Size: 546.8 GB 
Fault Tolerance: RAID 5 Heads: 255 Sectors Per Track: 32 Cylinders: 65535 Strip Size: 64 KB Full Stripe Size: 256 KB Status: OK Caching: Enabled Parity Initialization Status: Initialization Completed Unique Identifier: 600508B100184A395356573334550003 Disk Name: /dev/cciss/c0d1 Mount Points: None OS Status: LOCKED Logical Drive Label: A5C9C6F81504 Drive Type: Data Array: B Interface Type: SAS Unused Space: 0 MB Status: OK Array Type: Data physicaldrive 1I:1:3 Port: 1I Box: 1 Bay: 3 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM2H5PE00009802NK19 Model: HP DG146ABAB4 Current Temperature (C): 30 Maximum Temperature (C): 37 PHY Count: 1 PHY Transfer Rate: Unknown physicaldrive 1I:1:4 Port: 1I Box: 1 Bay: 4 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM28YY400009750MKPJ Model: HP DG146ABAB4 Current Temperature (C): 31 Maximum Temperature (C): 36 PHY Count: 1 PHY Transfer Rate: 3.0Gbps physicaldrive 2I:1:5 Port: 2I Box: 1 Bay: 5 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM2FGYV00009802N3GN Model: HP DG146ABAB4 Current Temperature (C): 30 Maximum Temperature (C): 38 PHY Count: 1 PHY Transfer Rate: Unknown physicaldrive 2I:1:6 Port: 2I Box: 1 Bay: 6 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM8AFAK00009920MMV1 Model: HP DG146BB976 Current Temperature (C): 31 Maximum Temperature (C): 41 PHY Count: 2 PHY Transfer Rate: Unknown, Unknown physicaldrive 2I:1:7 Port: 2I Box: 1 Bay: 7 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM2FJQD00009801MSHQ Model: HP DG146ABAB4 Current Temperature (C): 29 Maximum Temperature (C): 39 PHY Count: 1 PHY Transfer Rate: Unknown

    Read the article

  • Explanation needed for the “Ask, don't tell” approach?

    - by the_naive
    I'm taking a course on design patterns in software engineering, and here I'm trying to understand the good and the bad ways of designing with respect to "coupling" and "cohesion". I could not understand the concept described in the following image. The code example shown in the image is ambiguous to me, so I can't quite grasp what exactly the "Ask, don't tell!" approach means. Could you please explain?
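
    The image from the question is not available here, but the contrast this principle draws is usually illustrated with something like the following hypothetical C# sketch (the Wallet/TellWallet classes and the amounts are invented for illustration and are not taken from the course material): in the "ask" style the caller pulls an object's data out and makes the decision itself, while in the "tell" style the caller states what it wants and the object that owns the data makes the decision.

        using System;

        // "Ask" style: the caller inspects Wallet's state and mutates it from outside,
        // so the payment rule lives far from the data it governs (higher coupling).
        class Wallet
        {
            public decimal Cash { get; set; }
        }

        static class AskStyleCheckout
        {
            public static void Charge(Wallet wallet, decimal price)
            {
                if (wallet.Cash >= price)   // ask for the data...
                {
                    wallet.Cash -= price;   // ...then decide and mutate it externally
                }
            }
        }

        // "Tell" style: the caller tells the wallet what it wants; the object that owns
        // the cash decides whether the payment happens (lower coupling, higher cohesion).
        class TellWallet
        {
            private decimal cash;
            public TellWallet(decimal cash) { this.cash = cash; }

            public bool TryPay(decimal price)
            {
                if (cash < price) return false;
                cash -= price;
                return true;
            }
        }

        class Demo
        {
            static void Main()
            {
                var wallet = new TellWallet(50m);
                Console.WriteLine(wallet.TryPay(20m));  // True: the wallet made the decision
                Console.WriteLine(wallet.TryPay(100m)); // False: the caller never saw the balance
            }
        }

    Keeping the decision next to the data it depends on is what reduces coupling and raises cohesion in this kind of example.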

    Read the article

  • 2D mask antialiasing in XNA HLSL

    - by mohsen
    I have two Texture2Ds; one of them is a mask texture that has only two kinds of color, and I use it to mask (filter) the second Texture2D with something like: float4 tex = tex2D(sprite, texCoord); float4 bitMask = tex2D(mask, texCoord); if (bitMask.a > 0) { return float4(0,0,0,0); } else { return float4(tex.b, tex.g, tex.r, 1); } But because the mask texture has only two colors, the result is too jagged. I want to know how I can do some antialiasing to smooth those edges. Thanks for reading, and sorry for my bad English.

    Read the article

  • Does HTML 5 Make “Rich vs. Reach” a False Choice?

    - by andrewbrust
    The competition between the Web and proprietary rich platforms, including Windows, Mac OS, iPhone/iPad, Adobe’s Flash/AIR and Microsoft’s Silverlight, is not new. But with the emergence of HTML 5 and imminent support for it in the next release of the major Web browsers, the battle is heating up. And with the announcements made Wednesday at Google's I/O conference, it's getting kicked up yet another notch. The impact of this platform battle on companies in the media and advertising world, and the developers who serve them, is significant. The most prominent question is whether video and rich media online will shift towards pure HTML and away from plug-ins like Flash and Silverlight. In fact, certain features in HTML 5 make it suitable for development for line of business applications as well, further threatening those plug-in technologies. So what's the deal? Is this real or hype? To answer that question, I've done my own research into HTML 5's features and talked to several media-focused, New York area developers to get their opinions. I present my findings to you in this post. Before bearing down into HTML 5 specifics and practitioners’ quotes, let's set the context. To understand what HTML 5 can do, take a look at this video of Sports Illustrated’s HTML 5 prototype. This should start to get you bought into the idea that HTML 5 could be a game-changer. Next, if you happen to have installed the beta version of Google's Chrome 5 browser, take a look at the page linked to below, and in that page, click on any of the game thumbnails to see what's possible, without a plug-in, in this brave new world. (Note, although the instructions for each game tell you to press the A key to start, press the Z key instead.). Here's the link: http://www.kesiev.com/akihabara As an adjunct to what's enabled by HTML 5, consider the various transforms that are part of CSS 3. If you're running Safari as your browser, the following link will showcase this live; if not, you'll see a bitmap that will give you an idea of what's possible: http://webkit.org/blog/386/3d-transforms Are you starting to get the picture (literally)? What has up until now required browser plug-ins and other patches to HTML, most typically Flash, will soon be renderable, natively, in all major browsers. Moreover, it's looking likely that developers will be able to deliver such content and experiences in these browsers using one base of markup and script code (using straight JavaScript and/or jQuery), without resorting to browser-specific code and workarounds. If you're skeptical of this, I wouldn't blame you, especially with respect to Microsoft's Internet Explorer. However, i can tell you with confidence that even Microsoft is dedicated to full-on HTML 5 support in version 9 of that browser, which is currently under development. So what’s new in HTML 5, specifically, that makes sites like this possible?  The specification documents go into deep detail, and there’s no sense in rehashing them here, but a summary is probably in order.   Here is a non-authoritative, but useful, list of the major new feature areas in HTML 5: 2D drawing capabilities and 3D transforms. 2D drawing instructions can be embedded statically into a Web page; application interactivity and animation can be achieved through script.  As mentioned above, 3D transforms are technically part of version 3 of the CSS (Cascading Style Sheets) spec, rather than HTML 5, but they can nonetheless be thought of as part of the bundle.  
They allow for rendering of 3D images and animations that, together with 2D drawing, make HTML-based games much more feasible than they are presently, as the links above demonstrate. Embedded audio and video. A media player can appear directly in a rendered Web page, using HTML markup and no plug-ins. Alternately, player controls can be hidden and the content can play automatically. Major enhancements to form-based input. This includes such things as specification of required fields, embedding of text “hints” into a control, limiting valid input on a field to dates, email addresses or a list of values.  There’s more to this, but the gist is that line-of-business applications, with complicated input and data validation, are supported directly Offline caching, local storage and client-side SQL database. These facilities allow Web applications to function more like native apps, even if no internet connection is available. User-defined data. Data (or metadata – data about data) can easily be embedded statically and/or retrieved and updated with Javascript code. This avoids having to embed that data in a separate file, or within script code. Taken together, these features position HTML to compete with, and perhaps overtake, Adobe’s Flash/AIR (and Microsoft’s Silverlight) as a viable Web platform for media, RIAs (rich internet applications – apps that function more like desktop software than Web sites) and interactive Web content, including games. What do players in the media world think about this?  From the embedded video above, we know what Sports Illustrated (and, therefore, Time Warner) think.  Hulu, the major Internet site for broadcast TV content, is on record as saying HTML5 video does not pass muster with them, at least not yet.  YouTube, on the other hand, already has an experimental HTML 5-based version of their site.  TechCrunch has reported that NetFlix is flirting with HTML 5 too, especially as it pertains to embedded browsers in TV-based devices.  And the New York Times’ Web site now embeds some video clips without resorting to Flash.  They have to – otherwise iPhone, iPod Touch and iPad users couldn’t see them in the Mobile Safari browser. What do media-focused developers think about all this?  I talked to several to get their opinions. Michael Pinto is CEO and Founder of Very Memorable Design whose primary focus has been to help marketing directors get traction online.  The firm’s client roster includes the likes Time, Inc., Scholastic and PBS.  Pinto predicts that “More and more microsites that were done entirely in Flash will be done more and more using jQuery. I can also see slideshows and video now being done without Flash. However if you needed to create a game or highly interactive activity Flash would still be the way to go for the web.” A dissenting view comes from Jesse Erlbaum, CEO of The Erlbaum Group, LLC, which serves numerous clients in the magazine publishing sector.  When I asked Erlbaum whether he thought HTML 5 and jQuery/JavaScript would steal significant market share from Flash, he responded “Not at all!  In particular, not for media and advertising customers!  These sectors are not generally in the business of making highly functional applications, which is the one place where HTML5/jQuery/etc really shines.” Ironically, Pinto’s firm is a heavy user of Flash for its projects and Erlbaum’s develops atop the “LAMP” (Linux, Apache, MySQL and PHP/Perl) stack.  For whatever reason, each firm seems to see the other’s toolset as a more viable choice.  
But both agree that the developer tool story around HTML 5 is deficient.  Pinto explains “What’s lost with [HTML 5 and Javascript] techniques is that there isn’t a single widely favored easy-to-use tool of choice for authoring. So with Flash you can get up and running right away and not worry about what is different from one browser to the next.“  Erlbaum agrees, saying: “HTML5/Javascript lacks a sophisticated integrated development environment (IDE) which is an essential part of Flash.  If what someone is trying to make is primarily animation, it's a waste of time…to do this in Javascript.  It can be done much more easily in Flash, and with greater cross-browser compatibility and consistency due to the ubiquity of Flash.” Adobe (maker of Flash since its 2005 acquisition of Macromedia) likely agrees.  And for better or worse, they’ve decided to address this shortcoming of HTML 5, even at the risk of diminishing their Flash platform. Yesterday Adobe announced that their hugely popular Dreamweaver Web design authoring tool would directly support HTML 5 and CSS 3 development.  In fact, the Adobe Dreamweaver CS5 HTML5 Pack is downloadable now from Adobe Labs. Maybe Adobe is bowing to pressure from ardent Web professionals like Scott Kellum, Lead Designer at Channel V Media, a digital and offline branding firm serving the media and marketing sectors, among others.  Kellum told me that HTML 5 “…will definitely move people away from Flash. It has many of the same functionalities with faster load times and better accessibility. HTML5 will help Flash as well: with the new caching methods you can now even run Flash apps offline.” Although all three Web developers I interviewed would agree that Flash is still required for more sophisticated applications, Kellum seems to have put his finger on why HTML 5 may nonetheless dominate.  In his view, much of the Web development out there has little need for high-end capabilities: “Most people want to add a little punch to a navigation bar or some video and now you can get the biggest bang for your buck with HTML5, CSS3 and Javascript.” I’ve already mentioned that Google’s ongoing I/O conference, at the Moscone West center in San Francisco, is driving the HTML 5 news cycle, big time.  And Google made many announcements of their own, including the open sourcing of their VP8 video codec, new enterprise-oriented capabilities for its App Engine cloud offering, and the creation of the Chrome Web Store, which the company says will make it easier to find and “install” Web applications, in a fashion similar to the way users procure native apps on various mobile platforms. HTML 5 looks to be disruptive, especially to the media world.  And even if the technology ends up disappointing, the chatter around it alone is causing big changes in the technology world.  If the richness it promises delivers, then magazine publishers and non-text digital advertisers may indeed have a platform for creating compelling content that loads quickly, is standards-based and will render identically in (the newest versions of) all major Web browsers.  Can this development in the digital arena save the titans of the print world?  I can’t predict, but it’s going to be fun to watch, and the competitive innovation from all players in both industries will likely be immense.

    Read the article

  • The Patent War of All Against All

    Brendan Scott’s Weblog: "Glyn notes that the software is being licensed under a BSD licence and notes that is a good thing, but then observes that there is a patent encumbrance on the code, and indicates this is a bad thing."

    Read the article

  • Declaration of Email Signatures [Video]

    - by Jason Fitzpatrick
    In honor of the Fourth of July and as a public service to highlight bad email signature practices, College Humor shares a peek at what the Declaration of Independence would look like if the Founding Fathers shared our modern sensibilities about email signatures. Declaration of Email Signatures [College Humor]

    Read the article

  • Intel Xeon 5600 (Westmere-EP) and 7500 (Nehalem-EX)

    - by jchang
    Intel Xeon 5600 (Westmere-EP) and 7500 (Nehalem-EX) Performance. Intel launched the Xeon 5600 series (Westmere-EP, 32nm) six-core processors on 16 March 2010 without any TPC benchmark results. In the performance world, no results almost always means bad or mediocre results. Yet there is every reason to believe that the Xeon 5600 series with six cores (X models only) will perform exactly as expected for a 50% increase in the number of cores at the same frequency (as the 5500) with no system level...(read more)

    Read the article

  • All Xen domU LVM volumes corrupt after reboot

    - by zcs
    I'm running a Debian Squeeze dom0, and after rebooting it all 7 of my domUs have data corruption. Each is setup as ext3 partition directly on a separate lvm2 volume. None of the lvm volumes will mount; all have bad superblocks. I've tried e2fsck with each superblock to no avail. What else can I try? Each domU has two LVM volumes connected to it, one for the disk and one for swap. The disk is mounted at root, formatted as a normal ext3 partition as a xen-blk device. The volumes are never mounted outside of the guest OS. I'm running Ubuntu 11.04 using the instructions here. I'm not sure that they didn't shutdown properly, all I know is they were corrupt after I issues a clean 'reboot' on the dom0. Here's a sample Xen config file; the rest are the same except for name, vcpus, memory, vif and disk. name = 'load1' vcpus = 2 memory = 512 vif = ['bridge=prbr0', 'bridge=eth0'] disk = ['phy:/dev/VolGroup00/load1-disk,xvda,w','phy:/dev/VolGroup00/load1-swap,xvdb,w'] #============================================================================ # Debian Installer specific variables def check_bool(name, value): value = str(value).lower() if value in ('t', 'tr', 'tru', 'true'): return True return False global var_check_with_default def var_check_with_default(default, var, val): if val: return val return default xm_vars.var('install', use='Install Debian, default: false', check=check_bool) xm_vars.var("install-method", use='Installation method to use "cdrom" or "network" (default: network)', check=lambda var, val: var_check_with_default('network', var, val)) # install-method == "network" xm_vars.var("install-mirror", use='Debian mirror to install from (default: http://archive.ubuntu.com/ubuntu)', check=lambda var, val: var_check_with_default('http://archive.ubuntu.com/ubuntu', var, val)) xm_vars.var("install-suite", use='Debian suite to install (default: natty)', check=lambda var, val: var_check_with_default('natty', var, val)) # install-method == "cdrom" xm_vars.var("install-media", use='Installation media to use (default: None)', check=lambda var, val: var_check_with_default(None, var, val)) xm_vars.var("install-cdrom-device", use='Installation media to use (default: xvdd)', check=lambda var, val: var_check_with_default('xvdd', var, val)) # Common options xm_vars.var("install-arch", use='Debian mirror to install from (default: amd64)', check=lambda var, val: var_check_with_default('amd64', var, val)) xm_vars.var("install-extra", use='Extra command line options (default: None)', check=lambda var, val: var_check_with_default(None, var, val)) xm_vars.var("install-installer", use='Debian installer to use (default: network uses install-mirror; cdrom uses /install.ARCH)', check=lambda var, val: var_check_with_default(None, var, val)) xm_vars.var("install-kernel", use='Debian installer kernel to use (default: uses install-installer)', check=lambda var, val: var_check_with_default(None, var, val)) xm_vars.var("install-ramdisk", use='Debian installer ramdisk to use (default: uses install-installer)', check=lambda var, val: var_check_with_default(None, var, val)) xm_vars.check() if not xm_vars.env.get('install'): bootloader="/usr/sbin/pygrub" elif xm_vars.env['install-method'] == "network": import os.path print "Install Mirror: %s" % xm_vars.env['install-mirror'] print "Install Suite: %s" % xm_vars.env['install-suite'] if xm_vars.env['install-installer']: installer = xm_vars.env['install-installer'] else: installer = xm_vars.env['install-mirror']+"/dists/"+xm_vars.env['install-suite'] + \ 
"/main/installer-"+xm_vars.env['install-arch']+"/current/images" print "Installer: %s" % installer print print "WARNING: Installer kernel and ramdisk are not authenticated." print if xm_vars.env.get('install-kernel'): kernelurl = xm_vars.env['install-kernel'] else: kernelurl = installer + "/netboot/xen/vmlinuz" if xm_vars.env.get('install-ramdisk'): ramdiskurl = xm_vars.env['install-ramdisk'] else: ramdiskurl = installer + "/netboot/xen/initrd.gz" import urllib class MyUrlOpener(urllib.FancyURLopener): def http_error_default(self, req, fp, code, msg, hdrs): raise IOError("%s %s" % (code, msg)) urlopener = MyUrlOpener() try: print "Fetching %s" % kernelurl kernel, _ = urlopener.retrieve(kernelurl) print "Fetching %s" % ramdiskurl ramdisk, _ = urlopener.retrieve(ramdiskurl) except IOError, _: raise elif xm_vars.env['install-method'] == "cdrom": arch_path = { 'i386': "/install.386", 'amd64': "/install.amd" } if xm_vars.env['install-media']: print "Install Media: %s" % xm_vars.env['install-media'] else: raise OptionError("No installation media given.") if xm_vars.env['install-installer']: installer = xm_vars.env['install-installer'] else: installer = arch_path[xm_vars.env['install-arch']] print "Installer: %s" % installer if xm_vars.env.get('install-kernel'): kernelpath = xm_vars.env['install-kernel'] else: kernelpath = installer + "/xen/vmlinuz" if xm_vars.env.get('install-ramdisk'): ramdiskpath = xm_vars.env['install-ramdisk'] else: ramdiskpath = installer + "/xen/initrd.gz" disk.insert(0, 'file:%s,%s:cdrom,r' % (xm_vars.env['install-media'], xm_vars.env['install-cdrom-device'])) bootloader="/usr/sbin/pygrub" bootargs="--kernel=%s --ramdisk=%s" % (kernelpath, ramdiskpath) print "From CD" else: print "WARNING: Unknown install-method: %s." % xm_vars.env['install-method'] if xm_vars.env.get('install'): # Figure out command line if xm_vars.env['install-extra']: extras=[xm_vars.env['install-extra']] else: extras=[] # Reboot will just restart the installer since this file is not # reparsed, so halt and restart that way. extras.append("debian-installer/exit/always_halt=true") extras.append("--") extras.append("quiet") console="hvc0" try: if len(vfb) >= 1: console="tty0" except NameError, e: pass extras.append("console="+ console) extra = str.join(" ", extras) print "command line is \"%s\"" % extra root There are two LVM logical volumes connected to each VM. Here's the fdisk -l output for the disk volume: Disk /dev/VolGroup00/VMNAME-disk: 8589 MB, 8589934592 bytes 255 heads, 63 sectors/track, 1044 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00029c01 Device Boot Start End Blocks Id System /dev/VolGroup00/VMNAME-disk1 1 1045 8386560 83 Linux And the swap volume: Disk /dev/VolGroup00/VMNAME-swap: 536 MB, 536870912 bytes 37 heads, 35 sectors/track, 809 cylinders Units = cylinders of 1295 * 512 = 663040 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0004faae Device Boot Start End Blocks Id System /dev/VolGroup00/VMNAME-swap1 2 809 522240 82 Linux swap / Solaris Partition 1 has different physical/logical beginnings (non-Linux?): phys=(0, 32, 33) logical=(1, 21, 19) Partition 1 has different physical/logical endings: phys=(65, 36, 35) logical=(808, 4, 28)

    Read the article

  • Site Search Engine for 1,000 page website

    - by Ian
    I manage a website with about 1,000 articles that need to be searchable by my members. The site search engines I've tried all had their own problems: Fluid Dynamics Search Engine Since it's written in perl, it was a bit hacky to integrate with my PHP-based CMS. I basically had to file_get_contents the search results page. However, FDSE had the best search results. Google CSE Ugh, the search results SUCK. It can't find documents even using unique strings. I'm so surprised that a Google search product is this bad. Nor can I get any answers on their 'help' forums, and I am a paying user. Boo, Google. Boo. Sphider Again, bad search results. Unable to locate some phrases used in link text. Better results than Google CSE though. Shame on Google that a free PHP script has better search results than their paid application. IndexTank This one looked really promising. I got all set up with their PHP API client. But it would only randomly add articles that I submitted. Out of 700+ articles I pushed to the index through their API, only 8 made it in. Unable to find any help on this subject. Update for IndexTank -- Got the above issue fixed, so this looks most promising so far. The site itself runs on php/mysql and FreeBSD, though this shouldn't matter for a web crawling indexer. I've looked at Lucene, but I don't know anything about Java or installing Java programs on my web server. I also do not have root access on my web server, if this would be required for installation. I really don't need a lot of fancy features. It just needs to be able to crawl my web site and return great (even decent!) search results. I don't need any crazy search operators. It doesn't need to index off my primary domain. It just needs to work! Thanks, Hive Mind!

    Read the article

  • Code style Tip: Case insensitive string comparison

    - by Michael Freidgeim
    Good:
    if (String.Compare(myString, ALL_TEXT, StringComparison.OrdinalIgnoreCase) == 0) { return true; }
    OK (not obvious what true means):
    if (String.Compare(myString, ALL_TEXT, true) == 0) { return true; }
    Bad (not null-safe):
    if (myString.ToLower() == ALL_TEXT.ToLower()) { return true; }
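
    For context, here is a small runnable sketch of the recommended form; ALL_TEXT and myString are placeholder names carried over from the tip, and the String.Equals overload shown at the end is a closely related null-safe alternative that is not part of the original post:

        using System;

        class CaseInsensitiveDemo
        {
            const string ALL_TEXT = "all";

            static bool IsAllText(string myString)
            {
                // Ordinal, case-insensitive comparison; unlike calling ToLower() on a
                // possibly-null string, String.Compare does not throw when myString is null.
                return String.Compare(myString, ALL_TEXT, StringComparison.OrdinalIgnoreCase) == 0;
            }

            static void Main()
            {
                Console.WriteLine(IsAllText("ALL"));  // True
                Console.WriteLine(IsAllText(null));   // False, and no NullReferenceException
                // Equivalent null-safe check with an explicit comparison type:
                Console.WriteLine(String.Equals("All", ALL_TEXT, StringComparison.OrdinalIgnoreCase)); // True
            }
        }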

    Read the article

  • Pitfalls of using MySQL as your database choice?

    - by Sergio
    I've read online on multiple occasions that MySQL is a bad database. The places I've read this include some threads on Reddit, but they never seem to delve into why it's a poor product. Is there any truth to this claim? I've never used it beyond a very simple CRUD scenario, and that was for a university project during my second year. What pitfalls, if any, are there when choosing MySQL as your database?

    Read the article

  • Create new partition on live production CentOS server

    - by Kimmel
    I have a production server that is running CentOS. I'd like to create a partition on the server without having to reinstall everything. I have CLI and VNC access to the remote server. Is there any way that I can create a partition safely? Here's the output from fdisk -l: Disk /dev/sda: 85.9 GB, 85899345920 bytes 255 heads, 63 sectors/track, 10443 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00033d5e Device Boot Start End Blocks Id System /dev/sda1 * 1 10444 83885056 83 Linux Thanks.

    Read the article

  • ANTS Memory Profiler 7.0 Review

    - by Michael B. McLaughlin
    (This is my first review as a part of the GeeksWithBlogs.net Influencers program. It’s a program in which I (and the others who have been selected for it) get the opportunity to check out new products and services and write reviews about them. We don’t get paid for this, but we do generally get to keep a copy of the software or retain an account for some period of time on the service that we review. In this case I received a copy of Red Gate Software’s ANTS Memory Profiler 7.0, which was released in January. I don’t have any upgrade rights nor is my review guided, restrained, influenced, or otherwise controlled by Red Gate or anyone else. But I do get to keep the software license. I will always be clear about what I received whenever I do a review – I leave it up to you to decide whether you believe I can be objective. I believe I can be. If I used something and really didn’t like it, keeping a copy of it wouldn’t be worth anything to me. In that case though, I would simply uninstall/deactivate/whatever the software or service and tell the company what I didn’t like about it so they could (hopefully) make it better in the future. I don’t think it’d be polite to write up a terrible review, nor do I think it would be a particularly good use of my time. There are people who get paid for a living to review things, so I leave it to them to tell you what they think is bad and why. I’ll only spend my time telling you about things I think are good.) Overview of Common .NET Memory Problems When coming to the land of managed memory from the wilds of unmanaged code, it’s easy to say to one’s self, “Wow! Now I never have to worry about memory problems again!” But this simply isn’t true. Managed code environments, such as .NET, make many, many things easier. You will never have to worry about memory corruption due to a bad pointer, for example (unless you’re working with unsafe code, of course). But managed code has its own set of memory concerns. For example, failing to unsubscribe from events when you are done with them leaves the publisher of an event with a reference to the subscriber. If you eliminate all your own references to the subscriber, then that memory is effectively lost since the GC won’t delete it because of the publishing object’s reference. When the publishing object itself becomes subject to garbage collection then you’ll get that memory back finally, but that could take a very long time depending on the life of the publisher. Another common source of resource leaks is failing to properly release unmanaged resources. When writing a class that contains members that hold unmanaged resources (e.g. any of the Stream-derived classes, IsolatedStorageFile, most classes ending in “Reader” or “Writer”), you should always implement IDisposable, making sure to use a properly written Dispose method. And when you are using an instance of a class that implements IDisposable, you should always make sure to use a 'using' statement in order to ensure that the object’s unmanaged resources are disposed of properly. (A ‘using’ statement is a nicer, cleaner looking, and easier to use version of a try-finally block. The compiler actually translates it as though it were a try-finally block. Note that Code Analysis warning 2202 (CA2202) will often be triggered by nested using blocks. A properly written dispose method ensures that it only runs once such that calling dispose multiple times should not be a problem.
Nonetheless, CA2202 exists and if you want to avoid triggering it then you should write your code such that only the innermost IDisposable object uses a ‘using’ statement, with any outer code making use of appropriate try-finally blocks instead). Then, of course, there are situations where you are operating in a memory-constrained environment or else you want to limit or even eliminate allocations within a certain part of your program (e.g. within the main game loop of an XNA game) in order to avoid having the GC run. On the Xbox 360 and Windows Phone 7, for example, for every 1 MB of heap allocations you make, the GC runs; the added time of a GC collection can cause a game to drop frames or run slowly thereby making it look bad. Eliminating allocations (or else minimizing them and calling an explicit Collect at an appropriate time) is a common way of avoiding this (the other way is to simplify your heap so that the GC’s latency is low enough not to cause performance issues). ANTS Memory Profiler 7.0 When the opportunity to review Red Gate’s recently released ANTS Memory Profiler 7.0 arose, I jumped at it. In order to review it, I was given a free copy (which does not include upgrade rights for future versions) which I am allowed to keep. For those of you who are familiar with ANTS Memory Profiler, you can find a list of new features and enhancements here. If you are an experienced .NET developer who is familiar with .NET memory management issues, ANTS Memory Profiler is great. More importantly still, if you are new to .NET development or you have no experience or limited experience with memory profiling, ANTS Memory Profiler is awesome. From the very beginning, it guides you through the process of memory profiling. If you’re experienced and just want to dive in, however, it doesn’t get in your way. The help items are well designed and located right next to the UI controls so that they are easy to find without being intrusive. When you first launch it, it presents you with a “Getting Started” screen that contains links to “Memory profiling video tutorials”, “Strategies for memory profiling”, and the “ANTS Memory Profiler forum”. I’m normally the kind of person who looks at a screen like that only to find the “Don’t show this again” checkbox. Since I was doing a review, though, I decided I should examine them. I was pleasantly surprised. The overview video clocks in at three minutes and fifty seconds. It begins by showing you how to get started profiling an application. It explains that profiling is done by taking memory snapshots periodically while your program is running and then comparing them. ANTS Memory Profiler (I’m just going to call it “ANTS MP” from here) analyzes these snapshots in the background while your application is running. It briefly mentions a new feature in Version 7, a new API that gives you the ability to trigger snapshots from within your application’s source code (more about this below). You can also, and this is the more common way you would do it, take a memory snapshot at any time from within the ANTS MP window by clicking the “Take Memory Snapshot” button in the upper right corner. The overview video goes on to demonstrate a basic profiling session on an application that pulls information from a database and displays it.
It shows how to switch which snapshots you are comparing, explains the different sections of the Summary view and what they are showing, and proceeds to show you how to investigate memory problems using the “Instance Categorizer” to track the path from an object (or set of objects) to the GC’s root in order to find what things along the path are holding a reference to it/them. For a set of objects, you can then click on it and get the “Instance List” view. This displays all of the individual objects (including their individual sizes, values, etc.) of that type which share the same path to the GC root. You can then click on one of the objects to generate an “Instance Retention Graph” view. This lets you track directly up to see the reference chain for that individual object. In the overview video, it turned out that there was an event handler which was holding on to a reference, thereby keeping a large number of strings that should have been freed in memory. Lastly the video shows the “Class List” view, which lets you dig in deeply to find problems that might not have been clear when following the previous workflow. Once you have at least one memory snapshot you can begin analyzing. The main interface is in the “Analysis” tab. You can also switch to the “Session Overview” tab, which gives you several bar charts highlighting basic memory data about the snapshots you’ve taken. If you hover over the individual bars (and the individual colors in bars that have more than one), you will see a detailed text description of what the bar is representing visually. The Session Overview is good for a quick summary of memory usage and information about the different heaps. You are going to spend most of your time in the Analysis tab, but it’s good to remember that the Session Overview is there to give you some quick feedback on basic memory usage stats. As described above in the summary of the overview video, there is a certain natural workflow to the Analysis tab. You’ll spin up your application and take some snapshots at various times such as before and after clicking a button to open a window or before and after closing a window. Taking these snapshots lets you examine what is happening with memory. You would normally expect that a lot of memory would be freed up when closing a window or exiting a document. By taking snapshots before and after performing an action like that you can see whether or not the memory is really being freed. If you already know an area that’s giving you trouble, you can run your application just like normal until just before getting to that part and then you can take a few strategic snapshots that should help you pin down the problem. Something the overview didn’t go into is how to use the “Filters” section at the bottom of ANTS MP together with the Class List view in order to narrow things down. The video tutorials page has a nice 3 minute intro video called “How to use the filters”. It’s a nice introduction and covers some of the basics. I’m going to cover a bit more because I think they’re a really neat, really helpful feature. Large programs can bring up thousands of classes. Even simple programs can instantiate far more classes than you might realize. In a basic .NET 4 WPF application for example (and when I say basic, I mean just MainWindow.xaml with a button added to it), the unfiltered Class List view will have in excess of 1000 classes (my simple test app had anywhere from 1066 to 1148 classes depending on which snapshot I was using as the “Current” snapshot). 
This is amazing in some ways as it shows you how in stark detail just how immensely powerful the WPF framework is. But hunting through 1100 classes isn’t productive, no matter how cool it is that there are that many classes instantiated and doing all sorts of awesome things. Let’s say you wanted to examine just the classes your application contains source code for (in my simple example, that would be the MainWindow and App). Under “Basic Filters”, click on “Classes with source” under “Show only…”. Voilà. Down from 1070 classes in the snapshot I was using as “Current” to 2 classes. If you then click on a class’s name, it will show you (to the right of the class name) two little icon buttons. Hover over them and you will see that you can click one to view the Instance Categorizer for the class and another to view the Instance List for the class. You can also show classes based on which heap they live on. If you chose both a Baseline snapshot and a Current snapshot then you can use the “Comparing snapshots” filters to show only: “New objects”; “Surviving objects”; “Survivors in growing classes”; or “Zombie objects” (if you aren’t sure what one of these means, you can click the helpful “?” in a green circle icon to bring up a popup that explains them and provides context). Remember that your selection(s) under the “Show only…” heading will still apply, so you should update those selections to make sure you are seeing the view you want. There are also links under the “What is my memory problem?” heading that can help you diagnose the problems you are seeing including one for “I don’t know which kind I have” for situations where you know generally that your application has some problems but aren’t sure what the behavior you have been seeing (OutOfMemoryExceptions, continually growing memory usage, larger memory use than expected at certain points in the program). The Basic Filters are not the only filters there are. “Filter by Object Type” gives you the ability to filter by: “Objects that are disposable”; “Objects that are/are not disposed”; “Objects that are/are not GC roots” (GC roots are things like static variables); and “Objects that implement _______”. “Objects that implement” is particularly neat. Once you check the box, you can then add one or more classes and interfaces that an object must implement in order to survive the filtering. Lastly there is “Filter by Reference”, which gives you the option to pare down the list based on whether an object is “Kept in memory exclusively by” a particular item, a class/interface, or a namespace; whether an object is “Referenced by” one or more of those choices; and whether an object is “Never referenced by” one or more of those choices. Remember that filtering is cumulative, so anything you had set in one of the filter sections still remains in effect unless and until you go back and change it. There’s quite a bit more to ANTS MP – it’s a very full featured product – but I think I touched on all of the most significant pieces. You can use it to debug: a .NET executable; an ASP.NET web application (running on IIS); an ASP.NET web application (running on Visual Studio’s built-in web development server); a Silverlight 4 browser application; a Windows service; a COM+ server; and even something called an XBAP (local XAML browser application). You can also attach to a .NET 4 process to profile an application that’s already running. 
The startup screen also has a large number of “Charting Options” that let you adjust which statistics ANTS MP should collect. The default selection is a good, minimal set. It’s worth your time to browse through the charting options to examine other statistics that may also help you diagnose a particular problem. The more statistics ANTS MP collects, the longer it will take to collect statistics. So just turning everything on is probably a bad idea. But the option to selectively add in additional performance counters from the extensive list could be a very helpful thing for your memory profiling as it lets you see additional data that might provide clues about a particular problem that has been bothering you. ANTS MP integrates very nicely with all versions of Visual Studio that support plugins (i.e. all of the non-Express versions). Just note that if you choose “Profile Memory” from the “ANTS” menu that it will launch profiling for whichever project you have set as the Startup project. One quick tip from my experience so far using ANTS MP: if you want to properly understand your memory usage in an application you’ve written, first create an “empty” version of the type of project you are going to profile (a WPF application, an XNA game, etc.) and do a quick profiling session on that so that you know the baseline memory usage of the framework itself. By “empty” I mean just create a new project of that type in Visual Studio then compile it and run it with profiling – don’t do anything special or add in anything (except perhaps for any external libraries you’re planning to use). The first thing I tried ANTS MP out on was a demo XNA project of an editor that I’ve been working on for quite some time that involves a custom extension to XNA’s content pipeline. The first time I ran it and saw the unmanaged memory usage I was convinced I had some horrible bug that was creating extra copies of texture data (the demo project didn’t have a lot of texture data so when I saw a lot of unmanaged memory I instantly figured I was doing something wrong). Then I thought to run an empty project through and when I saw that the amount of unmanaged memory was virtually identical, it dawned on me that the CLR itself sits in unmanaged memory and that (thankfully) there was nothing wrong with my code! Quite a relief. Earlier, when discussing the overview video, I mentioned the API that lets you take snapshots from within your application. I gave it a quick trial and it’s very easy to integrate and make use of and is a really nice addition (especially for projects where you want to know what, if any, allocations there are in a specific, complicated section of code). The only concern I had was that if I hadn’t watched the overview video I might never have known it existed. Even then it took me five minutes of hunting around Red Gate’s website before I found the “Taking snapshots from your code" article that explains what DLL you need to add as a reference and what method of what class you should call in order to take an automatic snapshot (including the helpful warning to wrap it in a try-catch block since, under certain circumstances, it can raise an exception, such as trying to call it more than 5 times in 30 seconds. The difficulty in discovering and then finding information about the automatic snapshots API was one thing I thought could use improvement. Another thing I think would make it even better would be local copies of the webpages it links to. 
Although I’m generally always connected to the internet, I imagine there are more than a few developers who aren’t or who are behind very restrictive firewalls. For them (and for me, too, if my internet connection happens to be down), it would be nice to have those documents installed locally or to have the option to download an additional “documentation” package that would add local copies. Another thing that I wish could be easier to manage is the Filters area. Finding and setting individual filters is very easy as is understanding what those filter do. And breaking it up into three sections (basic, by object, and by reference) makes sense. But I could easily see myself running a long profiling session and forgetting that I had set some filter a long while earlier in a different filter section and then spending quite a bit of time trying to figure out why some problem that was clearly visible in the data wasn’t showing up in, e.g. the instance list before remembering to check all the filters for that one setting that was only culling a few things from view. Some sort of indicator icon next to the filter section names that appears you have at least one filter set in that area would be a nice visual clue to remind me that “oh yeah, I told it to only show objects on the Gen 2 heap! That’s why I’m not seeing those instances of the SuperMagic class!” Something that would be nice (but that Red Gate cannot really do anything about) would be if this could be used in Windows Phone 7 development. If Microsoft and Red Gate could work together to make this happen (even if just on the WP7 emulator), that would be amazing. Especially given the memory constraints that apps and games running on mobile devices need to work within, a good memory profiler would be a phenomenally helpful tool. If anyone at Microsoft reads this, it’d be really great if you could make something like that happen. Perhaps even a (subsidized) custom version just for WP7 development. (For XNA games, of course, you can create a Windows version of the game and use ANTS MP on the Windows version in order to get a better picture of your memory situation. For Silverlight on WP7, though, there’s quite a bit of educated guess work and WeakReference creation followed by forced collections in order to find the source of a memory problem.) The only other thing I found myself wanting was a “Back” button. Between my Windows Phone 7, Zune, and other things, I’ve grown very used to having a “back stack” that lets me just navigate back to where I came from. The ANTS MP interface is surprisingly easy to use given how much it lets you do, and once you start using it for any amount of time, you learn all of the different areas such that you know where to go. And it does remember the state of the areas you were previously in, of course. So if you go to, e.g., the Instance Retention Graph from the Class List and then return back to the Class List, it will remember which class you had selected and all that other state information. Still, a “Back” button would be a welcome addition to a future release. Bottom Line ANTS Memory Profiler is not an inexpensive tool. But my time is valuable. I can easily see ANTS MP saving me enough time tracking down memory problems to justify it on a cost basis. 
More importantly to me, knowing what is happening memory-wise in my programs – and having the confidence that my code doesn’t contain hidden time bombs that will cause it to run out of memory if I leave it running for longer than a quick debugging session or a look at a new feature – is a good feeling. It’s a feeling I like having and want to continue to have. I got the current version for free in order to review it. Having done so, I’ve now added it to my must-have tools and will gladly lay out the money for the next version when it comes out. There is a 14-day free trial, so if you aren’t sure whether it’s right for you, or if it seems interesting but you aren’t sure it’s worth shelling out the money, give it a try.
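
As a footnote to the WP7/Silverlight comment above, here is a minimal, hedged sketch of the “WeakReference creation followed by forced collections” approach. This is my own illustration rather than code from the review or from Red Gate: the SuspectObject and LeakCheck names are made up, and forcing full collections like this is only appropriate in a diagnostic build, never in shipping code. (The snapshot API discussed earlier is, as described, just a single call into Red Gate’s DLL wrapped in a try-catch; see the “Taking snapshots from your code” article for the actual class and method names.)

```csharp
using System;

// Hypothetical type standing in for whatever object you suspect is being leaked.
class SuspectObject { }

static class LeakCheck
{
    // Creates an object via the supplied factory, drops the strong reference,
    // forces a full collection, and reports whether the object was reclaimed.
    static bool IsCollectable(Func<object> create)
    {
        var weakRef = new WeakReference(create());

        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        return !weakRef.IsAlive;
    }

    static void Main()
    {
        bool collected = IsCollectable(() => new SuspectObject());
        Console.WriteLine(collected
            ? "SuspectObject was collected - nothing is rooting it."
            : "SuspectObject is still alive - something is holding a reference to it.");
    }
}
```

Passing a factory delegate, rather than holding the new object in a local variable, makes it less likely that the JIT keeps a hidden strong reference alive while the check runs.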

    Read the article

  • Can't shrink Windows Boot NTFS disk: ERROR(5): Could not map attribute 0x80 in inode, Input/output error

    - by arcyqwerty
    Ubuntu 12.04 LTS, all updates current as of 7/3/2012
    gksudo gparted
    Shrink /dev/sda2 from 367GB to 307GB
    GParted 0.11.0 --enable-libparted-dmraid
    Libparted 2.3
    Shrink /dev/sda2 from 367.00 GiB to 307.00 GiB 00:32:57 ( ERROR )
    calibrate /dev/sda2 00:00:00 ( SUCCESS )
    path: /dev/sda2
    start: 20,484,096
    end: 790,142,975
    size: 769,658,880 (367.00 GiB)
    check file system on /dev/sda2 for errors and (if possible) fix them 00:00:53 ( SUCCESS )
    ntfsresize -P -i -f -v /dev/sda2
    ntfsresize v2012.1.15AR.1 (libntfs-3g)
    Device name : /dev/sda2
    NTFS volume version: 3.1
    Cluster size : 4096 bytes
    Current volume size: 394065338880 bytes (394066 MB)
    Current device size: 394065346560 bytes (394066 MB)
    Checking for bad sectors ...
    Checking filesystem consistency ...
    Accounting clusters ...
    Space in use : 327950 MB (83.2%)
    Collecting resizing constraints ...
    Estimating smallest shrunken size supported ...
    File feature         Last used at       By inode
    $MFT         : 389998 MB   0
    Multi-Record : 394061 MB   386464
    $MFTMirr     : 314823 MB   1
    Compressed   : 394064 MB   1019521
    Sparse       : 330887 MB   752454
    Ordinary     : 393297 MB   706060
    You might resize at 327949758464 bytes or 327950 MB (freeing 66116 MB).
    Please make a test run using both the -n and -s options before real resizing!
    shrink file system 00:32:04 ( ERROR )
    run simulation 00:32:04 ( ERROR )
    ntfsresize -P --force --force /dev/sda2 -s 329640837119 --no-action
    ntfsresize v2012.1.15AR.1 (libntfs-3g)
    Device name : /dev/sda2
    NTFS volume version: 3.1
    Cluster size : 4096 bytes
    Current volume size: 394065338880 bytes (394066 MB)
    Current device size: 394065346560 bytes (394066 MB)
    New volume size : 329640829440 bytes (329641 MB)
    Checking filesystem consistency ...
    Accounting clusters ...
    Space in use : 327950 MB (83.2%)
    Collecting resizing constraints ...
    Needed relocations : 13300525 (54479 MB)
    Schedule chkdsk for NTFS consistency check at Windows boot time ...
    Resetting $LogFile ... (this might take a while)
    Relocating needed data ...
    Updating $BadClust file ...
    Updating $Bitmap file ...
    ERROR(5): Could not map attribute 0x80 in inode 1667593: Input/output error
    ========================================
    Windows has run chkdsk successfully (on boot) several times now

    Read the article

  • How do I recover from a Linux CentOS 4.6 Operating System Crash

    - by Greg Omebije
    Our x86 Linux server running CentOS 4.6 has crashed. The machine boots only to the GRUB prompt. We have tried using the "rescue mode" to recover the system, but it hasn't worked. How can we fix this problem so that the machine boots normally? How can we fix this problem to the point where we can recover our files from the server?
    Our Linux server configuration:
    Dell PowerEdge 1950
    Intel Xeon
    2 HDD (146GB each)
    4GB RAM
    Hardware and software RAID setup
    CentOS 4.6
    We used Sysrecord to boot the computer; the following is the output of fdisk -l:
    Disk /dev/sda: 293.3 GB, 292326211584
    255 heads, 63 sectors/track, 35539 Cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x00000080
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 13 104391 83 Linux
    /dev/sda2 14 17769 142625070 8e Linux LVM

    Read the article

  • Where can I find luxury goods advertisements for my website?

    - by Nazariy
    I'm running a business directory for tourist attractions, and I would like to fill some empty blocks with useful advertisements such as flight operators, car retailers, luxury goods, etc. We have tried Google AdSense, but it's full of cheap, pointless, and irrelevant advertisements that would make our website look cheap and bad. So I'm curious: are there any centralised resources for luxury goods and services?

    Read the article

  • Upgrading from 12.10 to 13.04 -> dpkg: error processing sudo (--configure)

    - by Korrigan Nagirrok
    Here's the deal and reason I'm asking for your help. Last night I went on upgrading my Xubuntu 12.10 installation to 13.04, so at tty1 I run the command sudo do-release-upgrade and everything seemed to went well except that after rebooting and when I run sudo apt-get update && sudo apt-get upgrade I get this error: sudo apt-get update && sudo apt-get upgrade Hit http://pt.archive.ubuntu.com raring Release.gpg Hit http://pt.archive.ubuntu.com raring-updates Release.gpg Hit http://dl.google.com stable Release.gpg Hit http://pt.archive.ubuntu.com raring-backports Release.gpg Hit http://pt.archive.ubuntu.com raring Release Hit http://archive.canonical.com raring Release.gpg Hit http://ppa.launchpad.net raring Release.gpg Hit http://pt.archive.ubuntu.com raring-updates Release Hit http://extras.ubuntu.com raring Release.gpg Hit http://pt.archive.ubuntu.com raring-backports Release Hit http://dl.google.com stable Release Hit http://pt.archive.ubuntu.com raring/main Sources Hit http://pt.archive.ubuntu.com raring/restricted Sources Hit http://extras.ubuntu.com raring Release Hit http://archive.canonical.com raring Release Hit http://ppa.launchpad.net raring Release.gpg Hit http://pt.archive.ubuntu.com raring/universe Sources Hit http://pt.archive.ubuntu.com raring/multiverse Sources Hit http://dl.google.com stable/main i386 Packages Get:1 http://security.ubuntu.com raring-security Release.gpg [933 B] Hit http://pt.archive.ubuntu.com raring/main i386 Packages Hit http://extras.ubuntu.com raring/main Sources Hit http://ppa.launchpad.net raring Release Hit http://archive.canonical.com raring/partner i386 Packages Hit http://pt.archive.ubuntu.com raring/restricted i386 Packages Hit http://pt.archive.ubuntu.com raring/universe i386 Packages Hit http://extras.ubuntu.com raring/main i386 Packages Hit http://pt.archive.ubuntu.com raring/multiverse i386 Packages Hit http://ppa.launchpad.net raring Release Hit http://pt.archive.ubuntu.com raring/main Translation-en Hit http://ppa.launchpad.net raring/main Sources Hit http://ppa.launchpad.net raring/main i386 Packages Hit http://pt.archive.ubuntu.com raring/multiverse Translation-en Hit http://pt.archive.ubuntu.com raring/restricted Translation-en Hit http://pt.archive.ubuntu.com raring/universe Translation-en Hit http://pt.archive.ubuntu.com raring-updates/main Sources Hit http://pt.archive.ubuntu.com raring-updates/restricted Sources Hit http://ppa.launchpad.net raring/main Sources Hit http://pt.archive.ubuntu.com raring-updates/universe Sources Hit http://pt.archive.ubuntu.com raring-updates/multiverse Sources Hit http://pt.archive.ubuntu.com raring-updates/main i386 Packages Hit http://ppa.launchpad.net raring/main i386 Packages Hit http://pt.archive.ubuntu.com raring-updates/restricted i386 Packages Hit http://pt.archive.ubuntu.com raring-updates/universe i386 Packages Hit http://pt.archive.ubuntu.com raring-updates/multiverse i386 Packages Ign http://dl.google.com stable/main Translation-en_US Hit http://pt.archive.ubuntu.com raring-updates/main Translation-en Ign http://archive.canonical.com raring/partner Translation-en_US Ign http://extras.ubuntu.com raring/main Translation-en_US Ign http://dl.google.com stable/main Translation-en Ign http://archive.canonical.com raring/partner Translation-en Hit http://pt.archive.ubuntu.com raring-updates/multiverse Translation-en Ign http://extras.ubuntu.com raring/main Translation-en Hit http://pt.archive.ubuntu.com raring-updates/restricted Translation-en Hit http://pt.archive.ubuntu.com 
raring-updates/universe Translation-en Hit http://pt.archive.ubuntu.com raring-backports/main Sources Hit http://pt.archive.ubuntu.com raring-backports/restricted Sources Hit http://pt.archive.ubuntu.com raring-backports/universe Sources Hit http://pt.archive.ubuntu.com raring-backports/multiverse Sources Hit http://pt.archive.ubuntu.com raring-backports/main i386 Packages Hit http://pt.archive.ubuntu.com raring-backports/restricted i386 Packages Hit http://pt.archive.ubuntu.com raring-backports/universe i386 Packages Hit http://pt.archive.ubuntu.com raring-backports/multiverse i386 Packages Hit http://pt.archive.ubuntu.com raring-backports/main Translation-en Hit http://pt.archive.ubuntu.com raring-backports/multiverse Translation-en Get:2 http://security.ubuntu.com raring-security Release [40.8 kB] Hit http://pt.archive.ubuntu.com raring-backports/restricted Translation-en Hit http://pt.archive.ubuntu.com raring-backports/universe Translation-en Ign http://ppa.launchpad.net raring/main Translation-en_US Ign http://ppa.launchpad.net raring/main Translation-en Get:3 http://security.ubuntu.com raring-security/main Sources [2,109 B] Ign http://ppa.launchpad.net raring/main Translation-en_US Ign http://ppa.launchpad.net raring/main Translation-en Get:4 http://security.ubuntu.com raring-security/restricted Sources [14 B] Get:5 http://security.ubuntu.com raring-security/universe Sources [14 B] Get:6 http://security.ubuntu.com raring-security/multiverse Sources [14 B] Get:7 http://security.ubuntu.com raring-security/main i386 Packages [3,670 B] Get:8 http://security.ubuntu.com raring-security/restricted i386 Packages [14 B] Get:9 http://security.ubuntu.com raring-security/universe i386 Packages [2,824 B] Get:10 http://security.ubuntu.com raring-security/multiverse i386 Packages [14 B] Ign http://pt.archive.ubuntu.com raring/main Translation-en_US Ign http://pt.archive.ubuntu.com raring/multiverse Translation-en_US Ign http://pt.archive.ubuntu.com raring/restricted Translation-en_US Ign http://pt.archive.ubuntu.com raring/universe Translation-en_US Ign http://pt.archive.ubuntu.com raring-updates/main Translation-en_US Ign http://pt.archive.ubuntu.com raring-updates/multiverse Translation-en_US Hit http://security.ubuntu.com raring-security/main Translation-en Ign http://pt.archive.ubuntu.com raring-updates/restricted Translation-en_US Ign http://pt.archive.ubuntu.com raring-updates/universe Translation-en_US Ign http://pt.archive.ubuntu.com raring-backports/main Translation-en_US Ign http://pt.archive.ubuntu.com raring-backports/multiverse Translation-en_US Ign http://pt.archive.ubuntu.com raring-backports/restricted Translation-en_US Hit http://security.ubuntu.com raring-security/multiverse Translation-en Ign http://pt.archive.ubuntu.com raring-backports/universe Translation-en_US Hit http://security.ubuntu.com raring-security/restricted Translation-en Hit http://security.ubuntu.com raring-security/universe Translation-en Ign http://security.ubuntu.com raring-security/main Translation-en_US Ign http://security.ubuntu.com raring-security/multiverse Translation-en_US Ign http://security.ubuntu.com raring-security/restricted Translation-en_US Ign http://security.ubuntu.com raring-security/universe Translation-en_US Fetched 50.4 kB in 6s (7,454 B/s) Reading package lists... Done Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2 not fully installed or removed. Need to get 0 B/373 kB of archives. 
After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? Y
dpkg: error processing sudo (--configure):
 Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
No apport report written because MaxReports is reached already
dpkg: dependency problems prevent configuration of ubuntu-minimal:
 ubuntu-minimal depends on sudo; however:
  Package sudo is not configured yet.
dpkg: error processing ubuntu-minimal (--configure):
 dependency problems - leaving unconfigured
No apport report written because MaxReports is reached already
Errors were encountered while processing:
 sudo
 ubuntu-minimal
E: Sub-process /usr/bin/dpkg returned an error code (1)

I've tried everything I thought logical, like

sudo dpkg --configure -a
dpkg: error processing sudo (--configure):
 Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
dpkg: dependency problems prevent configuration of ubuntu-minimal:
 ubuntu-minimal depends on sudo; however:
  Package sudo is not configured yet.
dpkg: error processing ubuntu-minimal (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 sudo
 ubuntu-minimal

sudo apt-get install -f
Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2 not fully installed or removed.
Need to get 0 B/373 kB of archives.
After this operation, 0 B of additional disk space will be used.
dpkg: error processing sudo (--configure):
 Package is in a very bad inconsistent state - you should reinstall it before attempting configuration.
dpkg: dependency problems prevent configuration of ubuntu-minimal:
 ubuntu-minimal depends on sudo; however:
  Package sudo is not configured yet.
dpkg: error processing ubuntu-minimal (--configure):
 dependency problems - leaving unconfigured
No apport report written because MaxReports is reached already
No apport report written because MaxReports is reached already
Errors were encountered while processing:
 sudo
 ubuntu-minimal
E: Sub-process /usr/bin/dpkg returned an error code (1)

Can someone help me, please.

Edit: Here's some more info that could be of help for anyone. The output of apt-cache policy linux-image-generic-pae linux-generic-pae is

linux-image-generic-pae:
 Installed: (none)
 Candidate: 3.8.0.19.35
 Version table:
    3.8.0.19.35 0
       500 http://pt.archive.ubuntu.com/ubuntu/ raring/main i386 Packages
linux-generic-pae:
 Installed: (none)
 Candidate: 3.8.0.19.35
 Version table:
    3.8.0.19.35 0
       500 http://pt.archive.ubuntu.com/ubuntu/ raring/main i386 Packages

    Read the article

  • 1Tb disk formatted on Linux won't mount on windows nor mac

    - by Pedro MC
    I have an external HD (Western Digital) with 1 TB. I use Linux, but I wanted to reserve a cross-platform partition on the disk. I decided to create two partitions and used the "disks" application to do it. I created one partition with LUKS (version 1) encryption and the other one, the cross-platform one, with an NTFS filesystem. Things work fine on my OS, but when I try to use the disk (the cross-platform partition) on both Windows and Mac, the device is not recognized. What could it be?
    Next, the output of "sfdisk -l /dev/sdb":
    Disk /dev/sdb: 121600 cylinders, 255 heads, 63 sectors/track
    Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
    Device Boot Start End #cyls #blocks Id System
    /dev/sdb1 0+ 36473- 36473- 292968750 83 Linux
    /dev/sdb2 36473+ 121600- 85128- 683789062+ 83 Linux
    /dev/sdb3 0 - 0 0 0 Empty
    /dev/sdb4 0 - 0 0 0 Empty

    Read the article

  • Windows XP boot: black screen with cursor after BIOS screen

    - by Radio
    Here is a weird one. Got a computer with Windows XP; it's getting stuck on a black screen with a blinking cursor. What did I do:
    - Boot from the installation CD (recovery option - command line):
    chkdsk C: /R
    copy D:\i386\ntdetect.com c:\
    copy D:\i386\ntldr c:\
    fixmbr
    fixboot
    Chkdsk showed 0 bad sectors and no problems during the scan. dir on C:\ shows all directories and files in place (Windows, Program Files, Documents and Settings). BIOS shows the correct boot drive. Still does not boot. Not sure what to think of it. Please help.
    UPDATE: Just performed these steps:
    - Backed up the current disk C: (without MBR) using True Image to an external hard drive
    - Ran a clean Windows XP installation, deleting all partitions and creating a new one. The hard drive booted fine into the Windows GUI installation!
    - Then: interrupted the installation, booted from the True Image recovery CD, and restored the archive of disk C to a new partition. Same issue with the black screen.

    Read the article

  • 30 days and its ginger–help me get to £400

    - by simonsabin
    It’s taken 30 days and I have managed, to my surprise, to grow a ginger Mo in support of Movember: http://uk.movember.com/mospace/6154809 I didn’t quite reach my supposed lookalike, but I don’t think it was a bad effort. Next time I’ll leave my hair for a few months beforehand. If you fancy donating, you can do so here: https://www.movember.com/uk/donate/payment/member_id/6154809/ I only need £3 to reach £400, which would be great. The team have just passed £2000, which is awesome....(read more)

    Read the article

< Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >