Search Results

Search found 9044 results on 362 pages for 'bad sector'.

Page 57/362 | < Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >

  • Why I cannot copy install.wim from Windows 7 ISO to USB (in linux env)

    - by fastreload
    I need to make a bootable USB disk from a Windows 7 ISO. My USB stick is formatted to NTFS and the ISO is not corrupt. I can copy install.wim elsewhere, but I cannot copy it to the USB stick. I even tried rsync:

        rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
        rsync: write failed on "/media/52E866F5450158A4/sources/install.wim": Input/output error (5)
        rsync error: error in file IO (code 11) at receiver.c(322) [receiver=3.0.8]

    stat for install.wim:

          File: `X15-65732 (2)/sources/install.wim'
          Size: 2188587580  Blocks: 4274600  IO Block: 4096  regular file
        Device: 801h/2049d  Inode: 671984  Links: 1
        Access: (0664/-rw-rw-r--)  Uid: (1000/umur)  Gid: (1000/umur)
        Access: 2011-10-17 22:59:54.754619736 +0300
        Modify: 2009-07-14 12:26:40.000000000 +0300
        Change: 2011-10-17 22:55:47.327358410 +0300

    fdisk -l:

        Disk /dev/sdd: 8103 MB, 8103395328 bytes
        196 heads, 32 sectors/track, 2523 cylinders, total 15826944 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xc3072e18

           Device Boot  Start       End     Blocks  Id  System
        /dev/sdd1  *       32  15826943    7913456   7  HPFS/NTFS/exFAT

    hdparm -I /dev/sdd:

        SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        ATA device, with non-removable media
            Model Number:       UF?F?A????U]r???U u??tF?f?`~
            Serial Number:      ?@??~|
            Firmware Revision:  ????V?
            Media Serial Num:   $I?vnladip raititnot baelErrrol aoidgn
            Media Manufacturer: o eparitgns syetmiM
        Standards:
            Used: unknown (minor revision code 0x0c75)
            Supported: 12 8 6
            Likely used: 12
        Configuration:
            Logical        max     current
            cylinders      17218   0
            heads          0       0
            sectors/track  128     0
            --
            Logical/Physical Sector size: 512 bytes
            device size with M = 1024*1024: 0 MBytes
            device size with M = 1000*1000: 0 MBytes
            cache/buffer size = unknown
        Capabilities:
            IORDY(may be)(cannot be disabled)
            Queue depth: 11
            Standby timer values: spec'd by Vendor
            R/W multiple sector transfer: Max = 0  Current = ?
            Recommended acoustic management value: 254, current value: 62
            DMA: not supported
            PIO: unknown
                * reserved 69[0]
                * reserved 69[1]
                * reserved 69[3]
                * reserved 69[4]
                * reserved 69[7]
        Security:
            Master password revision code = 60253
            not supported
            not enabled
            not locked
            not frozen
            not expired: security count
            not supported: enhanced erase
            71112min for SECURITY ERASE UNIT. 172min for ENHANCED SECURITY ERASE UNIT.
        Integrity word not set (found 0xaa55, expected 0x80a5)
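
    A chunked copy with verification can help localize failures like this to a byte offset on the target medium. A minimal Python sketch (the chunk size and source path are hypothetical stand-ins, not from the post):

        import hashlib

        SRC = "sources/install.wim"                           # hypothetical source path
        DST = "/media/52E866F5450158A4/sources/install.wim"   # target from the error above
        CHUNK = 4 * 1024 * 1024                               # copy in 4 MiB chunks

        def digest(path):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for block in iter(lambda: f.read(CHUNK), b""):
                    h.update(block)
            return h.hexdigest()

        with open(SRC, "rb") as src, open(DST, "wb") as dst:
            offset = 0
            while True:
                data = src.read(CHUNK)
                if not data:
                    break
                try:
                    dst.write(data)
                    dst.flush()
                except OSError as e:
                    # An I/O error here implicates the target device, and the
                    # offset tells you roughly where it died.
                    raise SystemExit(f"write failed at offset {offset}: {e}")
                offset += len(data)

        print("match" if digest(SRC) == digest(DST) else "MISMATCH")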

    Read the article

  • How to make a Linux software RAID1 detect disc corruption?

    - by Paul
    This is one of those nightmare days: a virtualized server running on Linux software RAID1 hosts a VM that exhibits random segfaults in seemingly random chunks of code. While debugging, I found that a file gives a different md5sum on each and every run. Digging deeper, I found this: the raw disc partitions that make up the RAID1 mirror contain 2 bit-differences, and about 9 sectors are completely empty on one disc but filled with data on the other. Evidently Linux returns a sector from a nondeterministically chosen disc of the mirror set, so sometimes the same sector is returned intact and sometimes the corrupted one is returned.

    The docs say:

        RAID cannot and is not supposed to guard against data corruption on the media. Therefore, it doesn't make any sense either, to purposely corrupt data (using dd for example) on a disk to see how the RAID system will handle that. It is most likely (unless you corrupt the RAID superblock) that the RAID layer will never find out about the corruption, but your filesystem on the RAID device will be corrupted.

    Thanks. That will help me sleep. :-/

    Is there a way to have Linux at least detect this corruption, using sector checksumming or something like that? Would this be detected in a RAID5 setup? Is this the moment I wish I had used ZFS or btrfs (once it becomes usable without uber-admin capabilities)?
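
    For finding exactly where the mirror halves disagree, one low-tech option is to compare the two member partitions directly, with the array stopped or the filesystem mounted read-only. A rough Python sketch (the device names are hypothetical, and areas holding md metadata may legitimately differ, so hits need manual review):

        import sys

        DEV_A, DEV_B = "/dev/sda2", "/dev/sdb2"  # hypothetical RAID1 members
        SECTOR = 512

        with open(DEV_A, "rb") as a, open(DEV_B, "rb") as b:
            n = 0
            while True:
                sa, sb = a.read(SECTOR), b.read(SECTOR)
                if not sa or not sb:
                    break            # reached the end of one member
                if sa != sb:
                    print(f"sector {n} differs", file=sys.stderr)
                n += 1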

    Read the article

  • Need a place to store a few bytes of meta information on storage media

    - by Jason C
    I'm working on an embedded project. I need a place to store some filesystem-independent meta information on a storage device. The device has an MSDOS partition table. The device may also have unallocated space (depending on its size), but it will be TRIMmed (and may also be blown away by new partitions in the future).

    I need a location on the device that is not unallocated and that has a low risk of being touched (outside of completely erasing the device). The device is only guaranteed to have an MBR at the point the metadata first needs to be written, meaning there are no EBRs/VBRs present that I could use.

    There are 446 bytes at the very start of the device available for MBR bootstrap code. Currently my only idea is to store data at the end of this block. However, the device is bootable and I have no way of knowing whether I'd be blowing away bootstrap code or not. The sector size is 512 bytes and the MBR is the first sector; I'm pretty sure (correct me if I'm wrong) that means the second sector is available for use by partition data, so I can't use that either.

    Does anybody have any ideas? I need 4 bytes of space.
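
    As a sketch of the "tail of the bootstrap block" idea (with a loud caveat: the offsets are an assumption - classically bytes 0-445 of the MBR are bootstrap code, and Windows uses bytes 440-443 as a disk signature, so the 4 bytes at offsets 442-445 are only free if the installed boot code never reaches them):

        import struct

        DEV = "/dev/sdX"   # hypothetical device
        OFFSET = 442       # last 4 bytes of the classic 446-byte bootstrap area
        META = struct.pack("<I", 0xC0FFEE01)  # the 4 bytes of metadata

        with open(DEV, "r+b") as d:
            d.seek(OFFSET)
            existing = d.read(4)
            # All-zero bytes are only a heuristic for "unused by boot code";
            # a non-zero value may be live bootstrap code, so refuse to write.
            if existing not in (b"\x00" * 4, META):
                raise SystemExit("bytes are non-zero; may be bootstrap code")
            d.seek(OFFSET)
            d.write(META)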

    Read the article

  • How do I copy files between harddrives on Ubuntu CLI?

    - by ed209
    I have a dedicated server with a 120 GB main SSD. The server happens to come with a couple of 3000 GB hard drives. I'd like to use them to back up my main drive. Preferably, I'd like one as an exact copy of the main SSD and the other with incremental backups of the MySQL database and user uploads. These are the drives I have:

        Disk /dev/sda: 120.0 GB, 120034123776 bytes
        255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000f2e18

           Device Boot    Start        End     Blocks  Id  System
        /dev/sda1           2048    4196352   2097152+  83  Linux
        /dev/sda2        4198400    5246976    524288+  83  Linux
        /dev/sda3        5249024  234441647  114596312  83  Linux

        Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
        255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

        Disk /dev/sdb doesn't contain a valid partition table

        Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
        255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

        Disk /dev/sdc doesn't contain a valid partition table

    The first problem I have is that I have no idea how to copy from one drive to another. Kind of embarrassing, I know, but I don't know where to start. I'm thinking of this in terms of the Mac OS CLI, where I'm able to copy between /Volumes - is there an equivalent? (There is nothing under /mnt or /media.)
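
    Assuming the two drives have been partitioned, formatted, and mounted somewhere (the mount points below are hypothetical), rsync covers both jobs: a full mirror and cheap incremental copies. A sketch driving it from Python:

        import subprocess

        # Exact mirror of the root filesystem onto the first backup drive.
        # --delete keeps the copy exact; the excludes skip pseudo-filesystems.
        subprocess.run([
            "rsync", "-aAXH", "--delete",
            "--exclude=/dev/*", "--exclude=/proc/*", "--exclude=/sys/*",
            "--exclude=/run/*", "--exclude=/tmp/*", "--exclude=/mnt/*",
            "/", "/mnt/backup1/",
        ], check=True)

        # Incremental copy of user uploads; rsync transfers only changed files.
        subprocess.run(
            ["rsync", "-a", "/var/www/uploads/", "/mnt/backup2/uploads/"],
            check=True,
        )

    (For the MySQL part, the usual approach is to dump with mysqldump first and rsync the dump, since copying live InnoDB files is unsafe.)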

    Read the article

  • Disk doesn't contain a valid partition table

    - by Jeevan Dongre
    I was running an m1.small EC2 Ubuntu instance. I was running out of disk space, so I upgraded my instance to medium. When I upgraded I actually got 429.5 GB of space, and after that I added a 10 GB volume too. When I run "sudo fdisk -l" I get these results:

        Disk /dev/sda1: 8589 MB, 8589934592 bytes
        255 heads, 63 sectors/track, 1044 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/sda1 doesn't contain a valid partition table

        Disk /dev/sda2: 429.5 GB, 429461078016 bytes
        255 heads, 63 sectors/track, 52212 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/sda2 doesn't contain a valid partition table

        Disk /dev/sdf: 10.7 GB, 10737418240 bytes
        255 heads, 63 sectors/track, 1305 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

    sda1 is the primary partition and sda2 is what got added when upgrading my system to medium. But the problem persists: I am not able to pull the code from git, which gives me this error:

        remote: Counting objects: 409, done.
        remote: Compressing objects: 100% (236/236), done.
        fatal: write error: No space left on device
        fatal: index-pack failed
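
    One cause worth ruling out here (an assumption, not a diagnosis from the post): the block device grew when the instance was resized, but the filesystem on it was never resized, so the filesystem - and df - still see the old, full size. A small Python check comparing the two (mount point and device are hypothetical):

        import os

        MOUNT = "/"            # mount point backed by the enlarged device
        DEVICE = "/dev/sda2"   # the device that was grown

        st = os.statvfs(MOUNT)
        fs_bytes = st.f_frsize * st.f_blocks    # size the filesystem believes it has

        with open(DEVICE, "rb") as d:
            dev_bytes = d.seek(0, os.SEEK_END)  # true size of the block device

        print(f"filesystem: {fs_bytes / 1e9:.1f} GB, device: {dev_bytes / 1e9:.1f} GB")
        if dev_bytes > fs_bytes * 1.05:
            print("device outgrew the filesystem; a resize (e.g. resize2fs) may be needed")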

    Read the article

  • Boot records messed on dual boot (win7 and ubuntu) machine with SSD and HDD

    - by Michael
    I have a Lenovo IdeaPad Y570 with two hard drives, an SSD and a normal HDD, both managed by RapidDrive, with Windows 7 pre-installed. First, I shrank my 500 GB HDD a little to make some room for a Linux installation. Then I installed Linux Mint 12 onto it, also installing GRUB onto that drive (/dev/sdb); the installer did not allow me to install GRUB on sda. Then I replaced Linux Mint with Ubuntu 12.04, but installed GRUB onto the SSD (which is /dev/sda and was the default option). After that I could no longer boot into my Windows; only Ubuntu worked. So I did some research and tried: rewriting the Windows MBR onto sda1, reinstalling GRUB, and replacing GRUB 2 with GRUB legacy. Now I think my partition tables are totally messed up. Here is the fdisk -l output:

        ubuntu@ubuntu:~$ sudo fdisk -l

        Disk /dev/sda: 64.0 GB, 64023257088 bytes
        255 heads, 63 sectors/track, 7783 cylinders, total 125045424 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot   Start         End     Blocks  Id  System
        /dev/sda1  *       2048      411647     204800   7  HPFS/NTFS/exFAT
        /dev/sda2        411648  1009430959  504509656   7  HPFS/NTFS/exFAT

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x5e5d1cc8

           Device Boot     Start        End     Blocks  Id  System
        /dev/sdb1  *         1979  884389887  442193954+  12  Compaq diagnostics
        /dev/sdb2       884391934  976771071   46189569   5  Extended
        /dev/sdb5       884391936  937705471   26656768  83  Linux
        /dev/sdb6       937707520  967006207   14649344  83  Linux
        /dev/sdb7       967008256  976771071    4881408  82  Linux swap / Solaris

    I also can't mount any Windows partitions to recover data. When I open gparted, the whole sda disk appears unallocated and it states "can not have a partition outside the disk!"; the end-sector address of /dev/sda2 also confuses me. If I boot from the SSD, it throws an MBR error and won't boot; if I boot from the HDD, I only get the GRUB bash. How do I restore the partition tables? I can only boot this machine from a live CD. Thanks for any help.
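
    The confusing /dev/sda2 end sector can be checked with arithmetic taken straight from the fdisk output above, which also explains gparted's "partition outside the disk" complaint:

        total_sectors = 125045424    # "total 125045424 sectors" for /dev/sda
        sda2_end      = 1009430959   # End column of the /dev/sda2 line

        overshoot = sda2_end - (total_sectors - 1)
        print(f"/dev/sda2 claims to end {overshoot} sectors past the disk's last sector")
        # So the partition table entry itself is corrupt; it is the entry,
        # not necessarily the data, that needs repairing (testdisk is a
        # common tool for rebuilding such entries).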

    Read the article

  • Is it safe to format this partition?

    - by xanesis4
    On an Ubuntu server I own, I am running out of space. When I ran sudo parted /dev/sda -l to list all available drives, I got this:

        Model: ATA ST31000528AS (scsi)
        Disk /dev/sda: 1000GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End     Size    Type      File system  Flags
         1      1049kB  256MB   255MB   primary   ext2         boot
         2      257MB   1000GB  1000GB  extended
         5      257MB   1000GB  1000GB  logical                lvm

        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/server--vg-swap_1: 2135MB
        Sector size (logical/physical): 512B/512B
        Partition Table: loop

        Number  Start  End     Size    File system     Flags
         1      0.00B  2135MB  2135MB  linux-swap(v1)

        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/server--vg-root: 998GB
        Sector size (logical/physical): 512B/512B
        Partition Table: loop

        Number  Start  End    Size   File system  Flags
         1      0.00B  998GB  998GB  ext4

    I understand /dev/mapper/server--vg-root is the filesystem, and /dev/sda1 has some stuff related to GRUB. But what about /dev/sda2 and /dev/sda5? When I tried to mount /dev/sda2, it said that I needed to specify the file system, which according to the table is nonexistent. So, is it safe to format this with, say, ext4 and mount it? Also, when I tried to mount /dev/sda5, it gave me this error:

        mount: unknown filesystem type 'LVM2_member'

    I assume it is NOT safe to reformat this. If I'm wrong, that would be great, because I could save some space. Please let me know either way. Thanks in advance!

    UPDATE: Here is the result of mount:

        /dev/mapper/server--vg-root on / type ext4 (rw,errors=remount-ro)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        udev on /dev type devtmpfs (rw,mode=0755)
        devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
        none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
        none on /run/shm type tmpfs (rw,nosuid,nodev)
        /dev/sda1 on /boot type ext2 (rw,acl)
        /dev/sda1 on /media/hd2 type ext2 (rw)
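
    Before formatting anything, it is worth probing what is actually on each partition; blkid is the usual tool (and is where the LVM2_member label comes from). A small Python sketch:

        import subprocess

        for part in ["/dev/sda1", "/dev/sda2", "/dev/sda5"]:
            out = subprocess.run(
                ["blkid", "-o", "value", "-s", "TYPE", part],
                capture_output=True, text=True,
            ).stdout.strip()
            print(f"{part}: {out or 'no filesystem signature'}")

        # Reading the parted output above: /dev/sda2 is an extended partition,
        # i.e. just a container for logicals, so it has no filesystem to mount;
        # /dev/sda5 (LVM2_member) is the physical volume already backing
        # server--vg-root, so formatting it would destroy the root filesystem.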

    Read the article

  • Validating Petabytes of Data with Regularity and Thoroughness

    - by rickramsey
    by Brian Zents

    When former Intel CEO Andy Grove said "only the paranoid survive," he wasn't necessarily talking about tape storage administrators, but it's a lesson they've learned well. After all, tape storage is the last line of defense to prevent data loss, so tape administrators are extra cautious in making sure their data is secure. Not surprisingly, we are often asked for ways to validate tape media and the files on them.

    In the past, an administrator could validate the media, but doing so was often tedious or disruptive or both. The debut of the Data Integrity Validation (DIV) and Library Media Validation (LMV) features in the Oracle T10000C drive helped eliminate many of these pains. Also available with the Oracle T10000D drive, these features use hardware-assisted CRC checks that not only ensure the data is written correctly the first time, but also do so much more efficiently. Traditionally, a CRC check takes at least 25 seconds per 4GB file with a 2:1 compression ratio, but the T10000C/D drives can reduce the check to a maximum of nine seconds because the entire check is contained within the drive. No data needs to be sent to a host application. A time savings of at least 64 percent is extremely beneficial over the course of checking an entire 8.5TB T10000D tape.

    While the DIV and LMV features are better than anything else out there, what storage administrators really need is a way to check petabytes of data with regularity and thoroughness. With the launch of Oracle StorageTek Tape Analytics (STA) 2.0 in April, there is finally a solution that addresses this longstanding need. STA bundles these features into one interface to automate all media validation activities across all Oracle SL3000 and SL8500 tape libraries in an environment. And best of all, the validation process can be associated with the health checks an administrator would be doing already through STA. In fact, STA validates the media based on any of the following policies:

    - Random Selection – Randomly selects media for validation whenever a validation drive in the standalone library or library complex is available.
    - Media Health = Action – Selects media that have had a specified number of successive exchanges resulting in an Exchange Media Health of "Action." You can specify from one to five exchanges.
    - Media Health = Evaluate – Selects media that have had a specified number of successive exchanges resulting in an Exchange Media Health of "Evaluate." You can specify from one to five exchanges.
    - Media Health = Monitor – Selects media that have had a specified number of successive exchanges resulting in an Exchange Media Health of "Monitor." You can specify from one to five exchanges.
    - Extended Period of Non-Use – Selects media that have not had an exchange for a specified number of days. You can specify from 365 to 1,095 days (one to three years).
    - Newly Entered – Selects media that have recently been entered into the library.
    - Bad MIR Detected – Selects media with an exchange resulting in a "Bad MIR Detected" error. A bad media information record (MIR) indicates degraded high-speed access on the media.

    To avoid disrupting host operations, an administrator designates certain drives for media validation operations. If a host requests a file from media currently being validated, the host's request takes priority. To ensure that the administrator really knows it is the media that is bad, as opposed to the drive, STA includes drive calibration and qualification features.
    In addition, validation requests can be re-prioritized or cancelled as needed. To ensure that a specific tape isn't validated too often, STA prevents a tape from being validated twice within 24 hours via one of the policies described above. A tape can be validated more often if the administrator manually initiates the validation.

    When the validations are complete, STA reports the results. STA does not report simply a "good" or "bad" status. It also reports if media is degraded, so the administrator can migrate the data before there is a true failure. From that point, the administrators' paranoia is relieved, as they have the necessary information to make a sound decision about the health of the tapes in their environment.

    About the Photograph: Photograph taken by Rick Ramsey in Death Valley, California, May 2014.

    - Brian
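
    As a rough illustration of the selection rules described above (a toy model, not Oracle's implementation), the random-selection policy combined with the 24-hour rule might look like:

        import random
        from datetime import datetime, timedelta

        class Tape:
            def __init__(self, volser):
                self.volser = volser
                self.last_validated = None   # datetime of last validation, if any

        def pick_for_validation(tapes, now):
            """Randomly pick a tape not auto-validated in the last 24 hours."""
            eligible = [t for t in tapes
                        if t.last_validated is None
                        or now - t.last_validated > timedelta(hours=24)]
            return random.choice(eligible) if eligible else None

        tapes = [Tape(f"VOL{i:03d}") for i in range(5)]
        now = datetime.now()
        tape = pick_for_validation(tapes, now)
        if tape:
            tape.last_validated = now
            print(f"validating {tape.volser}")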

    Read the article

  • PASS Summit 2011 – Part II

    - by Tara Kizer
    I arrived in Seattle last Monday afternoon to attend PASS Summit 2011. I had really wanted to attend Gail Shaw's (blog|twitter) and Grant Fritchey's (blog|twitter) pre-conference seminar "All About Execution Plans" on Monday, but that would have meant flying out on Sunday, which I couldn't do. On Tuesday, I attended Allan Hirt's (blog|twitter) pre-conference seminar entitled "A Deep Dive into AlwaysOn: Failover Clustering and Availability Groups". Allan is a great speaker, and his seminar was packed with demos and information about AlwaysOn in SQL Server 2012. Unfortunately, I have lost my notes from this seminar and the presentation materials are only available on the pre-con DVD. Hmpf!

    On Wednesday, I attended Gail Shaw's "Bad Plan! Sit!", Andrew Kelly's (blog|twitter) "SQL 2008 Query Statistics", Dan Jones' (blog|twitter) "Improving your PowerShell Productivity", and Brent Ozar's (blog|twitter) "BLITZ! The SQL – More One Hour SQL Server Takeovers". In Gail's session, she went over how to fix bad plans and bad query patterns. Update your stale statistics!

    How to fix bad plans:
    - Use local variables – the optimizer can't sniff them, so it'll optimize for the "average" value
    - Use RECOMPILE (at the query or stored procedure level) – CPU hit
    - OPTIMIZE FOR hint – most common value you'll pass

    How to fix bad query patterns:
    - Don't use them – ha!
    - Catch-all queries: use dynamic SQL or OPTION (RECOMPILE)
    - Multiple execution paths: split into multiple stored procedures or OPTION (RECOMPILE)
    - Modifying parameter values: use local variables, split into outer and inner procedures, or OPTION (RECOMPILE)

    She also went into "last resort" and "very last resort" options, but those are risky unless you know what you are doing. For the average Joe, she wouldn't recommend these. Examples are query hints and plan guides.

    While I enjoyed Andrew's session, I didn't take any notes as it was familiar material. Andrew is a great speaker though, and I'd highly recommend attending his sessions in the future.

    Next up was Dan's PowerShell session. I need to look into profiles, manifests, function modules, and function import scripts more, as I just didn't quite grasp these concepts. I am attending a PowerShell training class at the end of November, so maybe that'll help clear it up. I really enjoyed the Excel integration demo. It was very cool watching PowerShell build the spreadsheet in real time. I must look into this more! On a side note, I am jealous of Dan's hair. Fabulous hair!

    Brent's session showed us how to quickly gather information about a server that you will be taking over database administration duties for. He wrote a script to do a fast health check and then later wrapped it into a stored procedure, sp_Blitz. I can't wait to use this at my work, even on systems where I've been the primary DBA for years; maybe there's something I've overlooked. We are using EPM to help standardize our environment and uncover problems, but sp_Blitz will definitely still help us out. He even provides a cloud-based update feature, sp_BlitzUpdate, for sp_Blitz so you don't have to constantly update it when he makes a change. I think I'll utilize his update code for some other challenges that we face at my work.

    Read the article

  • SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – T-SQL Example – Part 2 of 2

    - by pinaldave
    Yesterday I wrote a real-world story of how a friend thought they had an intrusion or virus issue, whereas the issue was really in the code. I strongly suggest you read my earlier blog post Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2 before continuing this blog post, as this is the second part of that post. Let me reproduce the simple scenario in T-SQL.

    Building Sample Data:

        USE [TestDB]
        GO
        -- Creating Table Products
        CREATE TABLE [dbo].[Products](
            [ProductID] [int] NOT NULL,
            [ProductDesc] [varchar](50) NOT NULL,
            CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED ([ProductID] ASC)
        ) ON [PRIMARY]
        GO
        -- Creating Table ProductDetails
        CREATE TABLE [dbo].[ProductDetails](
            [ProductDetailID] [int] NOT NULL,
            [ProductID] [int] NOT NULL,
            [Total] [int] NOT NULL,
            CONSTRAINT [PK_ProductDetails] PRIMARY KEY CLUSTERED ([ProductDetailID] ASC)
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[ProductDetails] WITH CHECK
        ADD CONSTRAINT [FK_ProductDetails_Products] FOREIGN KEY([ProductID])
        REFERENCES [dbo].[Products] ([ProductID])
        ON UPDATE CASCADE
        ON DELETE CASCADE
        GO
        -- Insert Data into Table
        USE TestDB
        GO
        INSERT INTO Products (ProductID, ProductDesc)
        SELECT 1, 'Bike'
        UNION ALL
        SELECT 2, 'Car'
        UNION ALL
        SELECT 3, 'Books'
        GO
        INSERT INTO ProductDetails ([ProductDetailID],[ProductID],[Total])
        SELECT 1, 1, 200
        UNION ALL
        SELECT 2, 1, 100
        UNION ALL
        SELECT 3, 1, 111
        UNION ALL
        SELECT 4, 2, 200
        UNION ALL
        SELECT 5, 3, 100
        UNION ALL
        SELECT 6, 3, 100
        UNION ALL
        SELECT 7, 3, 200
        GO

    Select Data from Tables:

        -- Selecting Data
        SELECT * FROM Products
        SELECT * FROM ProductDetails
        GO

    Delete Data from Products Table:

        -- Deleting Data
        DELETE FROM Products
        WHERE ProductID = 1
        GO

    Select Data from Tables Again:

        -- Selecting Data
        SELECT * FROM Products
        SELECT * FROM ProductDetails
        GO

    Clean up Data:

        -- Clean up
        DROP TABLE ProductDetails
        DROP TABLE Products
        GO

    My friend was confused: no delete was being fired against the ProductDetails table, yet deletes were happening there. The reason is the foreign key created between the Products and ProductDetails tables with the keywords ON DELETE CASCADE. When ON DELETE CASCADE is specified and data from Table A is deleted, any rows in another table that reference it via the foreign key are deleted as well.

    Workaround 1: Design Changes – 3 Tables

    Change the design to have more than two tables. Create one Product Master table with all the products; it should historically store the complete product list, and no product should ever be removed from it. Add another table called CurrentProduct containing only the rows that should be visible in the product catalogue, and a third table called ProductHistory. There should be no use of the CASCADE keyword among them.

    Workaround 2: Design Changes – Column IsVisible

    You can keep the same two tables, 1) Products and 2) ProductDetails. Add a column of BIT datatype named IsVisible, and change your application code to display the catalogue based on this column. There should be no need to delete anything.

    Workaround 3: Bad Advice

    I call this bad advice because that is exactly what it is; you should make the necessary design changes rather than use poor workarounds that can further damage system and database integrity. Here are the examples (bad advice begins here):

    1) Do not delete the data – well, this is not a real solution, but it can buy time to implement design changes.
    2) Do not have ON DELETE CASCADE – in this case you will have entries in ProductDetails with no corresponding ProductID, and later on there will be lots of confusion.
    3) Duplicate the data – move all the data from the Products table into the ProductDetails table and repeat it on each row, then remove the CASCADE code. This will let you delete Products rows without any issue. There are so many things wrong with this suggestion that I will not even start here.

    (Bad advice ends here.) Well, did I miss anything? Please help me with your suggestions.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Why 32-bit color EGL configurations fail with EGL_BAD_MATCH on Moto Droid?

    - by Gilead
    I'm trying to figure out why certain EGL configurations cause the eglMakeCurrent() call to return EGL_BAD_MATCH on a Motorola Droid running Android 2.1u1. This is the full list of hardware-accelerated EGL configurations (those with EGL_CONFIG_CAVEAT == EGL_NONE); there are a few others with EGL_CONFIG_CAVEAT == EGL_SLOW_CONFIG, but those are backed by PixelFlinger 1.2, meaning they use the software renderer.

        ID: 0  RGB: 8, 8, 8  Alpha: 8  Depth: 24  Stencil: 8   // BAD MATCH
        ID: 1  RGB: 8, 8, 8  Alpha: 8  Depth: 0   Stencil: 0   // BAD MATCH
        ID: 2  RGB: 8, 8, 8  Alpha: 8  Depth: 24  Stencil: 8   // BAD MATCH
        ID: 3  RGB: 8, 8, 8  Alpha: 8  Depth: 24  Stencil: 8   // BAD MATCH
        ID: 4  RGB: 8, 8, 8  Alpha: 8  Depth: 0   Stencil: 0   // BAD MATCH
        ID: 5  RGB: 8, 8, 8  Alpha: 8  Depth: 24  Stencil: 8   // BAD MATCH
        ID: 6  RGB: 5, 6, 5  Alpha: 0  Depth: 24  Stencil: 8
        ID: 7  RGB: 5, 6, 5  Alpha: 0  Depth: 0   Stencil: 0
        ID: 8  RGB: 5, 6, 5  Alpha: 0  Depth: 24  Stencil: 8

    Clearly, all configurations with 32-bit color depth fail and all 16-bit ones are OK, but:

    1. Why?
    2. WHY?! :)
    3. How do I tell which ones would fail before actually trying to use them?

    The code below is as simple as it can get. I put if (v[0] == 6) there to check different configs; normally they're chosen by a half-clever config matcher :)

        private void createSurface(SurfaceHolder holder) {
            egl = (EGL10) EGLContext.getEGL();
            eglDisplay = egl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);
            egl.eglInitialize(eglDisplay, null);

            int[] numConfigs = new int[1];
            egl.eglChooseConfig(eglDisplay, new int[] { EGL10.EGL_NONE }, null, 0, numConfigs);
            EGLConfig[] configs = new EGLConfig[numConfigs[0]];
            egl.eglChooseConfig(eglDisplay, new int[] { EGL10.EGL_NONE }, configs, numConfigs[0], numConfigs);

            int[] v = new int[1];
            for (EGLConfig c : configs) {
                egl.eglGetConfigAttrib(eglDisplay, c, EGL10.EGL_CONFIG_ID, v);
                if (v[0] == 6) {
                    eglConfig = c;
                }
            }

            eglContext = egl.eglCreateContext(eglDisplay, eglConfig, EGL10.EGL_NO_CONTEXT, null);
            if (eglContext == null || eglContext == EGL10.EGL_NO_CONTEXT) {
                throw new RuntimeException("Unable to create EGL context");
            }
            eglSurface = egl.eglCreateWindowSurface(eglDisplay, eglConfig, holder, null);
            if (eglSurface == null || eglSurface == EGL10.EGL_NO_SURFACE) {
                throw new RuntimeException("Unable to create EGL surface");
            }
            if (!egl.eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext)) {
                throw new RuntimeException("Unable to make EGL current");
            }
            gl = (GL10) eglContext.getGL();
        }

    Read the article

  • How to Global onRouteRequest directly to onBadRequest?

    - by virtualeyes
    EDIT: Came up with this to sanitize URI date params prior to passing off to the Play router:

        val ymdMatcher = "\\d{8}".r // matcher for yyyyMMdd URI param
        val ymdFormat = org.joda.time.format.DateTimeFormat.forPattern("yyyyMMdd")
        def ymd2Date(ymd: String) = ymdFormat.parseDateTime(ymd)

        override def onRouteRequest(r: RequestHeader): Option[Handler] = {
          import play.api.i18n.Messages
          ymdMatcher.findFirstIn(r.uri) map { ymd =>
            try { ymd2Date(ymd); super.onRouteRequest(r) }
            catch {
              case e: Exception => // kick to "bad" action handler on invalid date
                Some(controllers.Application.bad(Messages("bad.date.format")))
            }
          } getOrElse (super.onRouteRequest(r))
        }

    ORIGINAL: Let's say I want to return a BadRequest result type for all /foo URIs:

        override def onBadRequest(r: RequestHeader, error: String) = {
          BadRequest("Bad Request: " + error)
        }

        override def onRouteRequest(r: RequestHeader): Option[Handler] = {
          if (r.uri.startsWith("/foo")) onBadRequest("go away")
          else super.onRouteRequest(r)
        }

    Of course this does not work, since the expected return type is Option[play.api.mvc.Handler]. What's the idiomatic way to deal with this - create a default Application controller method to handle filtered bad requests? Ideally, since I know in onRouteRequest that /foo is in fact a BadRequest, I'd like to call onBadRequest directly.

    I should note that this is a contrived example; I am actually verifying a URI yyyyMMdd date param and BadRequest-ing if it does not parse to a JodaTime instance - basically a catch-all filter to sanitize a given date param rather than handling it on every single controller method call, not to mention avoiding cluttering up the application log with useless stack traces re: invalid date parse conversions (several MBs of junk trace entries accrue daily due to users pointlessly manipulating the URI date in attempts to get at paid subscriber content).

    Read the article

  • Microsoft T-SQL Counting Consecutive Records

    - by JeffW
    Problem: from the most recent day per person, count the number of consecutive days that each person has received 0 points for being good.

    Sample data to work from:

        Date        Name  Points
        2010-05-07  Jane  0
        2010-05-06  Jane  1
        2010-05-07  John  0
        2010-05-06  John  0
        2010-05-05  John  0
        2010-05-04  John  0
        2010-05-03  John  1
        2010-05-02  John  1
        2010-05-01  John  0

    Expected answer: Jane was bad on 5/7 but good the day before that, so Jane was only bad 1 day in a row most recently. John was bad on 5/7, again on 5/6, 5/5 and 5/4, and was good on 5/3, so John was bad the last 4 days in a row.

    Code to create sample data:

        IF OBJECT_ID('tempdb..#z') IS NOT NULL
        BEGIN
            DROP TABLE #z
        END
        select getdate() as Date, 'John' as Name, 0 as Points into #z
        insert into #z values(getdate()-1, 'John', 0)
        insert into #z values(getdate()-2, 'John', 0)
        insert into #z values(getdate()-3, 'John', 0)
        insert into #z values(getdate()-4, 'John', 1)
        insert into #z values(getdate(), 'Jane', 0)
        insert into #z values(getdate()-1, 'Jane', 1)
        select * from #z order by name, date desc
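
    The counting logic itself, sketched in Python for clarity (the set-based T-SQL equivalent usually finds, per person, the most recent day with points > 0 and counts the zero-point days after it):

        from collections import defaultdict

        rows = [  # (date, name, points), matching the sample data
            ("2010-05-07", "Jane", 0), ("2010-05-06", "Jane", 1),
            ("2010-05-07", "John", 0), ("2010-05-06", "John", 0),
            ("2010-05-05", "John", 0), ("2010-05-04", "John", 0),
            ("2010-05-03", "John", 1), ("2010-05-02", "John", 1),
            ("2010-05-01", "John", 0),
        ]

        by_name = defaultdict(list)
        for date, name, points in rows:
            by_name[name].append((date, points))

        for name, recs in by_name.items():
            streak = 0
            for date, points in sorted(recs, reverse=True):  # newest day first
                if points == 0:
                    streak += 1
                else:
                    break
            print(name, streak)   # -> Jane 1, John 4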

    Read the article

  • perl Getopt::Long madness

    - by ennuikiller
    The following code works in one script, yet in another it only works if I specify the "--" end-of-options flag before specifying an option:

        my $opt;
        GetOptions(
            'help|h'   => sub { usage("you want help?? hahaha, hopefully your not serious!!"); },
            'file|f=s' => \$opt->{FILE},
            'report|r' => \$opt->{REPORT},
        ) or usage("Bad Options");

    In other words, the same code works in good.pl and bad.pl like so:

        good.pl -f
        bad.pl -- -f

    If I try bad.pl -f I get "unknown option: f". Anyone have any clue as to what can cause this behavior? Thanks in advance!

    Read the article

  • iphone localization

    - by hardik
    Hello all. When I run the command to generate the Localizable.strings file from my terminal, it reports bad entries in the Classes files. The file gets generated but it has no entries in it, when in fact it should. Here is what I am running in my terminal; somehow it is not working, please guide me, I need to solve this:

        Last login: Mon Jun  7 18:02:09 on ttys000
        comp10:~ admin$ cd ..
        comp10:Users admin$ cd ..
        comp10:/ admin$ cd /Users/admin/Desktop/localisationwithcode
        comp10:localisationwithcode admin$ sudo
        usage: sudo -K | -L | -V | -h | -k | -l | -v
        usage: sudo [-HPSb] [-p prompt] [-u username|#uid] { -e file [...] | -i | -s | <command> }
        comp10:localisationwithcode admin$ genstrings Classes/*.m
        Bad entry in file Classes/localisationwithcodeViewController.m (line = 35): Argument is not a literal string.
        Bad entry in file Classes/localisationwithcodeViewController.m (line = 36): Argument is not a literal string.
        Bad entry in file Classes/localisationwithcodeViewController.m (line = 37): Argument is not a literal string.
        Bad entry in file Classes/localisationwithcodeViewController.m (line = 38): Argument is not a literal string.
        2010-06-07 18:04:45.047 genstrings[3851:10b] _CFGetHostUUIDString: unable to determine UUID for host. Error: 35

    Read the article

  • Why does the same Getopt::Long code work differently in different programs?

    - by ennuikiller
    The following code works in one script, yet in another it only works if I specify the -- end-of-options flag before specifying an option:

        my $opt;
        GetOptions(
            'help|h'   => sub { usage("you want help?? hahaha, hopefully you're not serious!!"); },
            'file|f=s' => \$opt->{FILE},
            'report|r' => \$opt->{REPORT},
        ) or usage("Bad Options");

    In other words, the same code works in good.pl and bad.pl like so:

        good.pl -f
        bad.pl -- -f

    If I try bad.pl -f I get unknown option: f. Anyone have any clue as to what can cause this behavior? Thanks in advance!

    I've solved this... and by the way, it's a VERY clear question (so why the downvotes?). I'll state it again: what would cause the identical GetOptions block to work in these 2 ways: "good.pl -f" and "bad.pl -- -f"? See how clear? Maybe you guys should think about it as if it were a TEST!

    Read the article

  • Excel VBA Application.OnTime. I think it's a bad idea to use this... thoughts either way?

    - by FinancialRadDeveloper
    I support a number of users who are asking for things to happen automatically (well, more automagically, but that's another point!). One wants events to happen every 120 secs (see my other question) and another wants one thing to happen, say, at 5pm each business day. This has to live in the Excel sheet itself, and therefore in VBA - add-ins etc. will be a no-no, as it needs to be self-contained. I have a big dislike of using Application.OnTime; I think it's dangerous and unreliable. What does everyone else think?

    Read the article

  • Is this a good or bad way to use constructor chaining? (... to allow for testing).

    - by panamack
    My motivation for chaining my class constructors here is so that I have a default constructor for mainstream use by my application and a second that allows me to inject a mock and a stub. It just seems a bit ugly 'new'-ing things in the ":this(...)" call, and counter-intuitive calling a parametrized constructor from a default constructor. I wondered what other people would do here? (FYI - SystemWrapper)

        using SystemWrapper;

        public class MyDirectoryWorker {
            // SystemWrapper interface allows for stub of sealed .NET class.
            private IDirectoryInfoWrap dirInf;
            private FileSystemWatcher watcher;

            public MyDirectoryWorker()
                : this(new DirectoryInfoWrap(new DirectoryInfo(MyDirPath)),
                       new FileSystemWatcher()) { }

            public MyDirectoryWorker(IDirectoryInfoWrap dirInf, FileSystemWatcher watcher) {
                this.dirInf = dirInf;
                if (!dirInf.Exists) {
                    dirInf.Create();
                }
                this.watcher = watcher;
                watcher.Path = dirInf.FullName;
                watcher.NotifyFilter = NotifyFilters.FileName;
                watcher.Created += new FileSystemEventHandler(watcher_Created);
                watcher.Deleted += new FileSystemEventHandler(watcher_Deleted);
                watcher.Renamed += new RenamedEventHandler(watcher_Renamed);
                watcher.EnableRaisingEvents = true;
            }

            public static string MyDirPath { get { return Settings.Default.MyDefaultDirPath; } }

            // etc...
        }
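
    For contrast, here is how the same "default for production, inject fakes for tests" pattern reads in a language with optional parameters - a Python sketch with hypothetical stand-ins for the wrapped types - which avoids 'new'-ing things inside a constructor chain altogether:

        import os

        MY_DIR_PATH = "/var/data/worker"   # hypothetical default path

        class RealDirectoryInfo:
            """Thin wrapper so tests can substitute a fake (in the spirit of IDirectoryInfoWrap)."""
            def __init__(self, path):
                self.full_name = path
            def exists(self):
                return os.path.isdir(self.full_name)
            def create(self):
                os.makedirs(self.full_name, exist_ok=True)

        class MyDirectoryWorker:
            # The optional parameter plays the role of the second C# constructor:
            # production code passes nothing, tests inject a fake.
            def __init__(self, dir_info=None):
                self.dir_info = dir_info or RealDirectoryInfo(MY_DIR_PATH)
                if not self.dir_info.exists():
                    self.dir_info.create()

        # A test touches no real filesystem:
        class FakeDirInfo:
            full_name = "/fake"
            def exists(self): return True
            def create(self): raise AssertionError("should not be called")

        worker = MyDirectoryWorker(dir_info=FakeDirInfo())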

    Read the article

  • Counting viable sublist lengths from an array.

    - by Ben B.
    This is for a genetic algorithm fitness function, so it is important that I can do this as efficiently as possible, as it will be repeated over and over.

    Let's say there is a function foo(int[] array) that returns true if the array is a "good" array and false if the array is a "bad" array. What makes it good or bad does not matter here. Given the full array [1,6,8,9,5,11,45,16,9], let's say that subarray [1,6,8] is a "good" array and [9,5,11,45] is a "good" array. Furthermore, [5,11,45,16,9] is a "good" array, and also the longest "good" subarray. Notice that while [9,5,11,45] is a "good" array and [5,11,45,16,9] is a "good" array, [9,5,11,45,16,9] is a "bad" array.

    We want the length counts of all "good" arrays, but not subarrays of "good" arrays. Furthermore, as described above, a "good" array might begin in the middle of another "good" array, but the combination of the two might be a "bad" array.
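
    Stated directly in code (a sketch; foo here is a stub for the question's black-box predicate, and the brute force is roughly O(n^4), so a real fitness function would want memoization or pruning on top):

        def foo(arr):
            # Stub standing in for the real "good array" predicate.
            return sum(arr) < 60   # hypothetical rule, for demonstration only

        def maximal_good_lengths(xs):
            n = len(xs)
            # Every contiguous subarray [i, j) that the predicate accepts.
            good = [(i, j) for i in range(n) for j in range(i + 1, n + 1)
                    if foo(xs[i:j])]
            # Keep only "good" spans not strictly contained in another good span.
            maximal = [(i, j) for (i, j) in good
                       if not any(a <= i and j <= b and (a, b) != (i, j)
                                  for (a, b) in good)]
            return [j - i for (i, j) in maximal]

        print(maximal_good_lengths([1, 6, 8, 9, 5, 11, 45, 16, 9]))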

    Read the article

  • Ticket Bayesian(or something else) Categorization

    - by vinnitu
    Hi. I am searching for a solution for a ticket management system. Do you know any commercial offerings? For now I only have my own dev project using the dspam library. Maybe I am using it wrong, but it shows bad results. My idea was to divide all pre-rated tickets into 2 groups: spam (my category of interest) and the rest (ham - everything not in that category). After that I trained my dspam. Then I re-divided all tickets into new groups (for the next category) and trained dspam again (with a new user, named after the category)... and it works badly... My thoughts on why: bad ticket data (incorrectly tagged beforehand), or a bad algorithm on my part (which is more likely). Please give me a direction to go forward. Thanks. I am interested in any idea and suggestion. Thanks again.

    Read the article

  • Bad performance function in PHP. With large files memory blows up! How can I refactor?

    - by André
    Hi. I have a function that strips lines out of files. I'm handling large files (more than 100 MB). The PHP memory limit is 256 MB, but the function that handles the stripping of lines blows up with a 100 MB CSV file.

    What the function must do is this: originally I have a CSV like:

        Copyright (c) 2007 MaxMind LLC. All Rights Reserved.
        locId,country,region,city,postalCode,latitude,longitude,metroCode,areaCode
        1,"O1","","","",0.0000,0.0000,,
        2,"AP","","","",35.0000,105.0000,,
        3,"EU","","","",47.0000,8.0000,,
        4,"AD","","","",42.5000,1.5000,,
        5,"AE","","","",24.0000,54.0000,,
        6,"AF","","","",33.0000,65.0000,,
        7,"AG","","","",17.0500,-61.8000,,
        8,"AI","","","",18.2500,-63.1667,,
        9,"AL","","","",41.0000,20.0000,,

    When I pass the CSV file to this function I get:

        locId,country,region,city,postalCode,latitude,longitude,metroCode,areaCode
        1,"O1","","","",0.0000,0.0000,,
        2,"AP","","","",35.0000,105.0000,,
        3,"EU","","","",47.0000,8.0000,,
        4,"AD","","","",42.5000,1.5000,,
        5,"AE","","","",24.0000,54.0000,,
        6,"AF","","","",33.0000,65.0000,,
        7,"AG","","","",17.0500,-61.8000,,
        8,"AI","","","",18.2500,-63.1667,,
        9,"AL","","","",41.0000,20.0000,,

    It only strips out the first line, nothing more. The problem is the performance of this function with large files: it blows up the memory. The function is:

        public function deleteLine($line_no, $csvFileName)
        {
            // This function strips a specific line from a file.
            // If a line is stripped, the function returns true, else false.
            //
            // e.g.
            // deleteLine(-1, xyz.csv); // strip last line
            // deleteLine(1, xyz.csv);  // strip first line

            // Assign the file name
            $filename = $csvFileName;

            $strip_return = FALSE;
            $data = file($filename);
            $pipe = fopen($filename, 'w');
            $size = count($data);

            if ($line_no == -1) $skip = $size - 1;
            else $skip = $line_no - 1;

            for ($line = 0; $line < $size; $line++)
                if ($line != $skip)
                    fputs($pipe, $data[$line]);
                else
                    $strip_return = TRUE;

            return $strip_return;
        }

    Is it possible to refactor this function so it doesn't blow past the 256 MB PHP memory limit? Give me some clues. Best regards,
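
    The memory blow-up comes from file(), which slurps the entire file into an array. The fix is to stream: read line by line, write every line except the target to a temporary file, then swap it into place. The same idea sketched in Python (memory use stays constant regardless of file size):

        import os
        import tempfile

        def delete_line(line_no, filename):
            """Strip line `line_no` (1-based; -1 = last line) without loading the file."""
            if line_no == -1:
                with open(filename, "rb") as f:
                    line_no = sum(1 for _ in f)   # one cheap counting pass
            stripped = False
            dir_ = os.path.dirname(os.path.abspath(filename))
            with open(filename, "r") as src, \
                 tempfile.NamedTemporaryFile("w", dir=dir_, delete=False) as tmp:
                for i, line in enumerate(src, start=1):
                    if i == line_no:
                        stripped = True
                    else:
                        tmp.write(line)
            os.replace(tmp.name, filename)
            return stripped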

    Read the article

  • default file/folder security permissions sbs 2003

    - by Floris
    I have lost all of the default file/folder security permissions of an SBS 2003 installation and was wondering: is there some command I can run to restore system file/folder permissions to their default values? I lost the permissions when I had a boot error and had to restore the primary boot sector from the backup boot sector, and then had to run fixboot to get the system booting again. Many thanks, Floris

    Read the article

  • Finding out the windows group by virtue of which a user is able to access a database in sql server?

    - by Raghu Dodda
    There is a SQL Server 2005 database with mixed-mode authentication. Among others, we have the following logins on the server: our-domain\developers-group-1 and our-domain\developers-group-2, which are AD groups. The login our-domain\developers-group-2 is added to the sysadmin role on the server, by virtue of which all domain users in that group can access any database, since SQL Server implicitly maps the sysadmin role to the dbo user in each database. There are two users: our-domain\good-user and our-domain\bad-user.

    The issue is the following: both good-user and bad-user have the exact same AD group memberships. They are both members of our-domain\developers-group-1 and our-domain\developers-group-2. good-user is able to access all the databases, and bad-user is not. bad-user is able to log in, but he is unable to access any databases. By the way, I am the good-user. How do I go about finding out why?

    Here's what I tried so far:

    - When I do print current_user, I get dbo.
    - When I do print system_user, I get my-domain\good-user.
    - When I do select * from fn_my_permissions(NULL, 'SERVER'), I see permissions. But if I do execute as user='my-domain\good-user'; select * from fn_my_permissions(NULL, 'SERVER'), I don't see any permissions. And when I do execute as user='my-domain\bad-user'; select * from fn_my_permissions(NULL, 'SERVER'), I don't see any permissions either.

    Also, I was wondering if there is a SQL command that will tell me, "hey! the current database user is able to access this database because he is a member of such-and-such AD group, which is a login that is mapped to such-and-such user in this database".

    Read the article

  • How do I stop and repair a RAID 5 array that has failed and has I/O pending?

    - by Ben Hymers
    The short version: I have a failed RAID 5 array which has a bunch of processes hung waiting on I/O operations on it; how can I recover from this?

    The long version: Yesterday I noticed Samba access was being very sporadic; accessing the server's shares from Windows would randomly lock up explorer completely after clicking on one or two directories. I assumed it was Windows being a pain and left it. Today the problem is the same, so I did a little digging; the first thing I noticed was that running ps aux | grep smbd gives a lot of lines like this:

        ben       969  0.0  0.2  96088  4128 ?  D    18:21  0:00 smbd -F
        root     1708  0.0  0.2  93468  4748 ?  Ss   18:44  0:00 smbd -F
        root     1711  0.0  0.0  93468  1364 ?  S    18:44  0:00 smbd -F
        ben      3148  0.0  0.2  96052  4160 ?  D    Mar07  0:00 smbd -F
        ...

    There are a lot of processes stuck in the "D" state. Running ps aux | grep " D" shows up some other processes, including my nightly backup script, all of which need to access the volume mounted on my RAID array at some point. After some googling, I found that it might be down to the RAID array failing, so I checked /proc/mdstat, which shows this:

        ben@jack:~$ cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid5 sdb1[3](F) sdc1[1] sdd1[2]
              2930271872 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]

        unused devices: <none>

    And running mdadm --detail /dev/md0 gives this:

        ben@jack:~$ sudo mdadm --detail /dev/md0
        /dev/md0:
                Version : 00.90
          Creation Time : Sat Oct 31 20:53:10 2009
             Raid Level : raid5
             Array Size : 2930271872 (2794.53 GiB 3000.60 GB)
          Used Dev Size : 1465135936 (1397.26 GiB 1500.30 GB)
           Raid Devices : 3
          Total Devices : 3
        Preferred Minor : 0
            Persistence : Superblock is persistent

            Update Time : Mon Mar  7 03:06:35 2011
                  State : active, degraded
         Active Devices : 2
        Working Devices : 2
         Failed Devices : 1
          Spare Devices : 0

                 Layout : left-symmetric
             Chunk Size : 64K

                   UUID : f114711a:c770de54:c8276759:b34deaa0
                 Events : 0.208245

            Number   Major   Minor   RaidDevice State
               3       8       17        0      faulty spare rebuilding   /dev/sdb1
               1       8       33        1      active sync   /dev/sdc1
               2       8       49        2      active sync   /dev/sdd1

    I believe this says that sdb1 has failed, and so the array is running with two drives out of three 'up'. Some advice I found said to check /var/log/messages for notices of failures, and sure enough there are plenty:

        ben@jack:~$ grep sdb /var/log/messages
        ...
        Mar  7 03:06:35 jack kernel: [4525155.384937] md/raid:md0: read error NOT corrected!! (sector 400644912 on sdb1).
        Mar  7 03:06:35 jack kernel: [4525155.389686] md/raid:md0: read error not correctable (sector 400644920 on sdb1).
        Mar  7 03:06:35 jack kernel: [4525155.389686] md/raid:md0: read error not correctable (sector 400644928 on sdb1).
        Mar  7 03:06:35 jack kernel: [4525155.389688] md/raid:md0: read error not correctable (sector 400644936 on sdb1).
        Mar  7 03:06:56 jack kernel: [4525176.231603] sd 0:0:1:0: [sdb] Unhandled sense code
        Mar  7 03:06:56 jack kernel: [4525176.231605] sd 0:0:1:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Mar  7 03:06:56 jack kernel: [4525176.231608] sd 0:0:1:0: [sdb] Sense Key : Medium Error [current] [descriptor]
        Mar  7 03:06:56 jack kernel: [4525176.231623] sd 0:0:1:0: [sdb] Add. Sense: Unrecovered read error - auto reallocate failed
        Mar  7 03:06:56 jack kernel: [4525176.231627] sd 0:0:1:0: [sdb] CDB: Read(10): 28 00 17 e1 5f bf 00 01 00 00

    To me it is clear that device sdb has failed, and I need to stop the array, shut down, replace it, reboot, then repair the array, bring it back up and mount the filesystem. I cannot hot-swap in a replacement drive, and I don't want to leave the array running in a degraded state. I believe I am supposed to unmount the filesystem before stopping the array, but that is failing, and that is where I'm stuck now:

        ben@jack:~$ sudo umount /storage
        umount: /storage: device is busy.
                (In some cases useful info about processes that
                 use the device is found by lsof(8) or fuser(1))

    It is indeed busy; there are some 30 or 40 processes waiting on I/O. What should I do? Should I kill all these processes and try again? Is that a wise move when they are 'uninterruptable'? What would happen if I tried to reboot?

    Please let me know what you think I should do. And please ask if you need any extra information to diagnose the problem or to help!
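
    On the "which processes are pinning the mount" question, fuser -vm /storage lists them; a Python sketch that finds D-state processes by scanning /proc (Linux-specific, field layout per proc(5)) is below. Note that processes in uninterruptible sleep cannot actually be killed - a SIGKILL only takes effect once the blocked I/O completes - which is why a reboot is often the only way out of this state:

        import os

        # List processes in uninterruptible sleep ("D"), the ones blocking umount.
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/stat") as f:
                    # comm may contain spaces, so split after the closing paren
                    state = f.read().rsplit(")", 1)[1].split()[0]
                if state == "D":
                    with open(f"/proc/{pid}/comm") as f:
                        comm = f.read().strip()
                    print(pid, comm)
            except OSError:
                continue  # process exited while we were scanning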

    Read the article
