Search Results

Search found 3935 results on 158 pages for 'extended procedures'.


  • file corruption on read/write 2.6.32-22-server (happens across many kernels)

    - by Jonathan
    Hi Guys, I'm having an issue where, after the server has been up for a period of time (a week or a few days), it will start reading corrupt data. For instance, when I run a sha1sum on a file after a fresh boot it stays the same, but after a while I start to get segfaults, and from then on every read of that file gives a different sha1sum. I've checked S.M.A.R.T. with long tests and I've run an extended memtest86+ (12 passes). My lspci is as follows:

        00:00.0 Host bridge: Advanced Micro Devices [AMD] RS780 Host Bridge
        00:01.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (int gfx)
        00:06.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 2)
        00:07.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 3)
        00:11.0 SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [AHCI mode]
        00:12.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller
        00:12.1 USB Controller: ATI Technologies Inc SB700 USB OHCI1 Controller
        00:12.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller
        00:13.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller
        00:13.1 USB Controller: ATI Technologies Inc SB700 USB OHCI1 Controller
        00:13.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller
        00:14.0 SMBus: ATI Technologies Inc SBx00 SMBus Controller (rev 3c)
        00:14.1 IDE interface: ATI Technologies Inc SB700/SB800 IDE Controller
        00:14.3 ISA bridge: ATI Technologies Inc SB700/SB800 LPC host controller
        00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge
        00:14.5 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI2 Controller
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control
        00:18.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control
        01:05.0 VGA compatible controller: ATI Technologies Inc Radeon HD 3300 Graphics
        01:05.1 Audio device: ATI Technologies Inc RS780 Azalia controller
        02:00.0 Ethernet controller: Atheros Communications Atheros AR8121/AR8113/AR8114 PCI-E Ethernet Controller (rev b0)
        03:00.0 FireWire (IEEE 1394): VIA Technologies, Inc. Device 3403

    I could really use some help on this - do you have any idea what could cause it? It's really frustrating me, as it seems to trigger entirely randomly and will not go away until I reboot. I also use KVM for virtualization as well as MD for software RAID on this server, and the processor is a Phenom II X4 965. I don't believe it's the software RAID, though, as this also affects files hosted on non-RAID partitions, so I'm at a loss.
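
    For reference, the changing-checksum symptom can be reproduced from a small script rather than by hand; a minimal sketch in Python, assuming a hypothetical file path and a 10-minute polling interval:

        import hashlib
        import time

        def sha1_of(path, chunk_size=1024 * 1024):
            # Hash the file in chunks so large files don't need to fit in memory.
            digest = hashlib.sha1()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        if __name__ == "__main__":
            path = "/srv/data/sample.bin"      # hypothetical file to watch
            baseline = sha1_of(path)
            while True:
                current = sha1_of(path)
                if current != baseline:
                    print("checksum changed:", baseline, "->", current)
                time.sleep(600)                # re-check every 10 minutes

    If the digest changes while nothing is writing to the file, that at least confirms the reads themselves are returning different bytes from one run to the next.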

    Read the article

  • What is it that kills laptop batteries?

    - by Mala
    There are many superstitions on what you must never do lest your battery become worthless - and by worthless I mean hold about 16 - 24 seconds of charge. This has happened to every laptop I have ever owned, and I just got a new one, so please help me sort out fact from fiction. Here are some of the things I've heard: Do not keep your laptop fully charged. You must run it completely down every so often. Do not use your laptop plugged in to the wall. Only plug it in when it needs charge. If you will not be using your laptop for a long period of time, don't leave it at full charge. Do not leave your laptop running 24/7. The first two I know to be complete fiction: this was true of old batteries such as you might have had in an iPod in 2003, but modern batteries function better when kept at or near full charge. Devices even have circuitry to prevent you from completely depleting your battery, as this is dangerous. The third point sounds probable, and I'd be interested to know if it was true. However, it doesn't really apply to me because I'm not really the type to leave my laptop alone for a day, much less a "long period of time" The fourth seems most likely of the above, but only because of causality: I have always done this, and my batteries have always crapped out on me. I've generally treated a laptop like a desktop with a battery backup, and that I can move from one room to another if necessary. The fact that my batteries tend to last less than 30 seconds has further entrenched this behavior. Should I be trying to break this habit? Are there any other things that ruin laptop batteries? I love that I can actually use my new laptop unplugged :) I'd like to keep it that way. Update: Additional question: If the computer will be used for an extended period of time plugged in, does it make sense to remove the battery first? Update 2: I know people with laptops older than mine, who actively use their laptops as much as I do, and their batteries still hold about an hours' charge while mine holds less than 30 seconds, hence my belief that something I'm doing kills them.

    Read the article

  • Keeping track of business rules within IT department?

    - by evaldas-alexander
    I am looking for the best way to keep track of the business rules for both developers and everybody else (support staff / management) in a startup environment. The challenge is that our business model requires quite a lot of different business rules, which are created pretty much on the fly and evolve organically after that. After running this project for 3+ years, we have so many of these rules that often the only way to be sure what the application is supposed to do in a certain situation is to find the module responsible for that process and analyze its code and comments. That is all fine as long as you have one single developer who created the entire application from scratch, but every new developer needs to go over pretty much the entire codebase in order to understand how the application works. An even bigger problem is that non-technical employees don't even have that option and are therefore forced to ask me pretty much every day how a certain case would be handled by the application. Quick example - we only start charging for our customer campaigns once they have been active for at least 72 hours, but at the same time we stop creating invoices for campaigns that belong to insolvent accounts and close such accounts within a month of the first failed charge. That does not apply to accounts that are set to "non-chargeable", which most commonly belong to us since we are using the service ourselves. The invoices are created on the 1st of each month and include charges from the previous month + any current balance the account might have. However, some customers are charged only 4 days after their invoice has been generated due to issues with their billing department. In addition to that, invoices are also created when a customer deactivates his campaign, but that can only be done once the campaign is no longer under the mandatory 6 month contract, unless an account manager approves early deactivation. I know, that's quite a lot of rules that need to be taken into account when answering the question "when do we bill our customers", but actually I could still append an asterisk at the end of each sentence in order to disclose some rare exceptions. Of course, it would be easiest just to keep the business rules to a minimum, but we need to adapt to a changing marketplace - i.e. less than a year ago we had no contracts whatsoever. One idea I have had so far is a simplistic wiki with categories corresponding to areas such as "Account activation", "Invoicing", "Collection procedures" and so on. Another idea would be a giant interactive flowchart showing the entire customer "life cycle" from prospecting to account deactivation. What are your experiences / suggestions?
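
    One variation on the wiki idea, sketched here only as an assumption about how it could look, is to keep the most mechanical rules as small executable checks that both the documentation and the tests can point at. A minimal Python sketch of just the charging rule above (the function and field names are made up for illustration):

        from datetime import datetime, timedelta

        MIN_ACTIVE = timedelta(hours=72)   # campaigns become chargeable after 72 hours

        def is_chargeable(activated_at, account_insolvent, account_non_chargeable, now=None):
            # Returns True if the campaign should appear on the next invoice.
            now = now or datetime.utcnow()
            if account_non_chargeable:     # internal/"non-chargeable" accounts are never billed
                return False
            if account_insolvent:          # invoicing stops for insolvent accounts
                return False
            return now - activated_at >= MIN_ACTIVE

    A wiki page per rule could then link to the check it describes, so the prose and the behaviour are less likely to drift apart.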

    Read the article

  • Troubleshooting an NFS server hanging after authenticated mount request

    - by Christoph
    I need some advice on troubleshooting an NFS server problem on Scientific Linux (RHEL) 6.1. The log on the server shows that an authenticated mount request was made:

        Jan 13 16:30:02 ??? rpc.mountd[3996]: authenticated mount request from ????:784 for /shared-storage/cm/shared (/shared-storage/cm/shared)

    But after that, it does not continue. On the client, it is also hanging. The interesting thing is that I have two NFS servers which should be identical; one is working perfectly, but the other exhibits the behaviour described above. The problem is also not completely persistent, i.e. sometimes the mount request succeeds. I assume that the problem must be related to the server rather than to the client, because everything works perfectly against the other server. My question is where I should look for the problem. I have already re-created the exports using exportfs -r, I have restarted the NFS server, and I have compared the rpcinfo outputs of both servers - no success. The problem even survives a reboot. Any other ideas are appreciated. In answer to Tim's question: I sporadically see the following in dmesg, but do not know whether it is related:

        e1000e 0000:0c:00.0: eth4: Detected Hardware Unit Hang:
          TDH                  <24>
          TDT                  <25>
          next_to_use          <25>
          next_to_clean        <24>
        buffer_info[next_to_clean]:
          time_stamp           <1c3d12940>
          next_to_watch        <24>
          jiffies              <1c3d12940>
          next_to_watch.status <0>
        MAC Status             <80383>
        PHY Status             <792d>
        PHY 1000BASE-T Status  <7800>
        PHY Extended Status    <3000>
        PCI Status             <10>

    Further edit: the problem above does not occur on the machine that is working, so it probably is related. Again an edit: the error is not on the (software) device that is used for NFS, but on another one. The NFS mount also does not trigger the message.

    Read the article

  • How to drop null values with a native Oracle XML DB Web Service?

    - by gfjr
    I am using native Oracle XML DB Web Services (a PL/SQL function exposed as a web service). I want to drop null values (put nothing in the output - no XML element). It works with Oracle 11.2.0.1.0 but not with Oracle 11.2.0.3.0. Just to clarify... I don't want to consume a web service with PL/SQL, I want to publish my PL/SQL packages/procedures/functions as a web service! Hope someone can help me. Thank you. In this example the column "country" is null. Oracle 11.2.0.1.0 (this is what I want):

        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <GET_PERSONOutput xmlns="http://xmlns.oracle.com/orawsv/TESTSTUFF/GET_PERSON">
              <RETURN>
                <PERSON>
                  <PERSON_ID>3</PERSON_ID>
                  <FIRST_NAME>Harry</FIRST_NAME>
                  <LAST_NAME>Potter</LAST_NAME>
                </PERSON>
              </RETURN>
            </GET_PERSONOutput>
          </soap:Body>
        </soap:Envelope>

    Oracle 11.2.0.3.0:

        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <GET_PERSONOutput xmlns="http://xmlns.oracle.com/orawsv/TESTSTUFF/GET_PERSON">
              <RETURN>
                <PERSON>
                  <PERSON_ID>3</PERSON_ID>
                  <FIRST_NAME>Harry</FIRST_NAME>
                  <LAST_NAME>Potter</LAST_NAME>
                  <COUNTRY/>
                </PERSON>
              </RETURN>
            </GET_PERSONOutput>
          </soap:Body>
        </soap:Envelope>
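
    If the 11.2.0.3.0 behaviour can't be changed on the server, one stopgap (purely an assumption, not the Oracle-side setting being asked for) is to strip empty elements when the response is consumed; a minimal Python sketch using only the standard library:

        import xml.etree.ElementTree as ET

        def strip_empty_elements(xml_text):
            # Remove elements such as <COUNTRY/> that carry no text, children or attributes.
            root = ET.fromstring(xml_text)
            for parent in root.iter():
                for child in list(parent):
                    if len(child) == 0 and not child.attrib and not (child.text or "").strip():
                        parent.remove(child)
            return ET.tostring(root, encoding="unicode")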

    Read the article

  • Operating systems on SD cards

    - by HisDudeness
    I've been getting some wild ideas over the last few days, like putting operating systems on SD cards rather than on my hard drive. I'll go into detail now and explain what led me to consider this probably abominable decision. I am on a laptop (which means I have a native SD card reader) currently running a cross-distro setup, with a bunch of Linux systems (placed in dedicated ext4 logical partitions inside one huge extended partition) managed by a single GRUB. As of today, my laptop hasn't seen a Windows system even through binoculars. I was thinking about placing the whole OS part of my setup on a Secure Digital card, to keep all of my 500 GB hard drive for documents, music, videos and so on, and to be able to just remove the SD and boot my system on another computer too, as well as being able to boot other systems on mine by just plugging in another SD, without having to keep it permanently in my PC. Also, in the remote case that in the near future I wanted to boot Windows 8 on it, I read that it causes major boot incompatibility issues with other systems by requiring a digital signature in order for them to start. By having it on a removable drive, I could just remove it when I don't need it and swap its card for the Linux one, so there would be no obstacle to their booting. Now, my questions are: I know that, unlike traditional rotating disk drives, integrated-circuit ones have a limited lifespan in terms of cluster rewriting. Is that an obstacle to this kind of usage? I mean, some Ultrabooks use SSDs now - is it the same issue, or are there differences between Solid State Drives and Secure Digital cards in that sense? Maybe having them store system files which sit in fixed positions (defeating even wear across clusters) and are constantly re-read and updated would just wear them out quickly - would it? Second question: are all motherboards and BIOSes able to boot from SD cards just like they can from USB pen drives (provided the card reader is USB-connected)? Or can bootloaders like GRUB not work when installed on SD cards? If they can't, would it be a solution to install GRUB to the MBR and point its boot entry at the SD? Will that work? Are there any other problems with installing OSs on a Secure Digital card?

    Read the article

  • Dell PowerEdge R720 - Corrupted RAID

    - by BT643
    Apologies in advance for the lengthy question. We have a Dell PowerEdge R720 server with:

        2 x 136GB SAS drives in RAID 1 for the OS (Ubuntu Server 12.04)
        6 x 3TB SATA drives in RAID 5 for data

    A few days ago we were getting errors when trying to access files on the large RAID 5 partition. We rebooted the server and got a message saying the RAID controller had found a foreign config. We've had this before, and just needed to use Dell's RAID configuration utility to import the foreign config on the RAID. Last time this worked, but this time it started doing a disk check and then we got this:

        FSCK has returned the following:
        "/dev/sdb1 inode 364738 has a bad extended attribute block 7
        /dev/sdb1 unexpected inconsistency run fsck manually (i.e without -a or -p options)
        MOUNTALL fsck /ourdatapartition [1019] terminated with status 4
        MOUNTALL filesystem has errors /ourdatapartition
        errors where found while checking the disk drive for /ourdatapartition
        Press F to fix errors, I to Ignore or M for Manual Recovery"

    We pressed F to try and fix the errors, but it eventually errored with:

        Inode 275841084, i_blocks is 167080, should be 0.  Fix? yes
        Inode 275841141 has an invalid extend node (blk 2206761006, lblk 0) Clear? yes
        Inode 275841141, i_blocks is 227872, should be 0.  Fix? yes
        Inode 275842303 has an invalid extend node (blk 2206760975, lblk 0) Clear? yes
        ....
        Error storing directory block information (inode=275906766, block=0, num=2699516178): Memory allocation failed
        /dev/sdb1: ***** FILE SYSTEM WAS MODIFIED *****
        e2fsck: aborted
        /dev/sdb1: ***** FILE SYSTEM WAS MODIFIED *****
        mountall: fsck /ourdatapartition [1286] terminated with status 9
        mountall: Unrecoverable fsck error: /ourdatapartition

    We noticed one of the drive lights was not lit at all and thought this drive might have failed and be the problem. We replaced the drive with a spare and tried "F" to repair it again, but we keep getting the same error as above. In the RAID configuration utility, all drives show as "online" and "optimal". We do have this data on another replicated server, so we're not worried about "recovering" anything; we just want to get the system back online asap. The server has 64 or 32GB of memory - I can't remember off the top of my head - but either way, with a 14TB RAID, I think that may still not be enough. Thanks. EDIT - I checked the memory usage while fsck was running, as suggested, and after 2 or 3 minutes it was using up nearly all of our server's memory. When it failed after 5 minutes or so with the error in my post, the memory immediately freed up again.

    Read the article

  • Can't Move Windows to 2nd Monitor without Left Mouse and Ctrl Key

    - by John C
    I have 2 very frustrating problems that maybe someone can help me with. I have 2 monitors (different sizes and resolutions) set up with the "Extended" monitor arrangement in Win7. My problem is this: I cannot "move" a window from my primary monitor (larger and higher resolution, on the right, in front of me) to my secondary monitor (smaller and lower resolution) just by selecting the title bar with the left mouse button and dragging it to the left. Windows 7 "snaps" it back to the primary monitor when the window is physically in the second monitor's area while I'm holding the left mouse button. I can prevent this problem by holding down the Ctrl key with the left mouse button, but this is extremely annoying to me. Also, I typically "lose" focus if I try typing input on the 2nd monitor. Typing is erratic with regard to keystroke accuracy from my keyboard translated into input on the 2nd screen. There is no problem with typing input on the primary left monitor. I find this extremely annoying in Windows 7, and turning off the "snap" feature via the Control Panel does NOT work for me. Win7 stubbornly refuses to move my selected window to my 2nd monitor without me "forcing" it with the Ctrl key. Please tell me this is not a Win7 feature. Also, on my system, Windows key + Shift + Left arrow (pressed together), or the same combination with the Right arrow key, don't do anything whatsoever. Windows key with "+" does, however, maximize the current window across both monitors, and I can "restore" it with Windows key and "-" back to the original monitor and size. I have tried various solutions, including changing the resolutions of one or both monitors, which sometimes "temporarily helps" but then reverts back to the problem. Also, if I swap the logical (not physical) layout so that I tell Win7 the monitors are set up in a reversed arrangement (large monitor on the left and small on the right), this also sometimes helps for a while - and it is very strange and awkward to work with "backwards". But all of these solutions stop working. The only solution that consistently works for "moving" windows is to hold the Ctrl key down as I'm moving a window with the left mouse button held on the title bar. Even that, however, doesn't prevent the loss of typing focus for me on the 2nd monitor - while at the same time the typing on the 1st monitor is fine. Any help on moving windows from one monitor to my 2nd monitor without having to press the Ctrl key while holding down my left mouse button will be appreciated. Also, any help on gaining typing "focus" on my 2nd screen will be helpful too. Thanks - John

    Read the article

  • Why does Photoshop CS5's Photomerge result immediately disappear?

    - by koiyu
    I have a bunch of JPG-files which I want to stitch together with Photoshop's Photomerge function. I choose File → Automate → Photomerge... and browse for the files. Photoshop opens the files and starts analyzing. I see the process bar filling and different phases are mentioned on the process bar. Nothing weird there. When the merging is done (and if I don't blink my eyes), I can see layers-palette is populated with the chosen files and, by quickly judging from the layer thumbnails, they're properly aligned. Sometimes the image window itself can be seen, but not always. Problem is that the layers and the image disappear in a flash. There is no error message. Everything is like prior starting the photomerge. No file has been changed. I could continue to use Photoshop normally. This is what I've tried so far: Loaded folder which has 38 JPG images, 4272 x 2848 and ˜ 5 megabytes per file Loaded the same files, but chose Use Files instead of Use Folder in the photomerge's window Loaded 19 JPG images, 4272 x 2848 and ˜ 5 megabytes per file Loaded 10 JPG images, ⇑ see above Loaded 5 JPG images, see above Loaded 3 JPG images, see above Scaled the images to 2256 x 1504 and ˜< 1 megabytes per file Loaded in a set of 38, 19, 10, 5, 3 Following steps are tested with these smaller files and with a set of 5 images Read Adobe's forums and reduced the amount of RAM Photoshop uses gradually from ˜ 80 % to 50 % (though I didn't understand the logic behind this) Would've reduced cache tile size to 128K, but it was set so already Disabled OpenGL Scaled the images to 800 x 533 and ˜ 100 kilobytes per file, loaded a set of 5 Read more unanswered threads around the internet In between each test I closed and reopened Photoshop. This is the first time I've even tried using photomerge. Am I doing something wrong? How can I locate what is the problem? How do I fix this? Photoshop is 64 bit Extended CS5 version. I'm on a mid-2010 quad-core (i5) iMac with up-to-date Mac OS X 10.6.6. Edit: Weird. First loading the images into one file via File → Scripts → Load Files into Stack… and then using Edit → Auto-Align Layers…, which, effectively, is the same as photomerge (even the dialog looks kind of the same), works! Even with the original JPGs without any issues. This doesn't fix photomerge, though.

    Read the article

  • SEO Help with Pages Indexed by Google

    - by Joe Majewski
    I'm working on optimizing my site for Google's search engine, and lately I've noticed that when doing a "site:www.joemajewski.com" query, I get results for pages that shouldn't be indexed at all. Let's take a look at this page, for example: http://www.joemajewski.com/wow/profile.php?id=3 I created my own CMS, and this is simply a breakdown of user id #3's statistics, which I noticed is indexed by Google, although it shouldn't be. I understand that it takes some time before Google's results reflect accurately on my site's content, but this has been improperly indexed for nearly six months now. Here are the precautions that I have taken: My robots.txt file has a line like this: Disallow: /wow/profile.php* When running the url through Google Webmaster Tools, it indicates that I did, indeed, correctly create the disallow command. It did state, however, that a page that doesn't get crawled may still get displayed in the search results if it's being linked to. Thus, I took one more precaution. In the source code I included the following meta data: <meta name="robots" content="noindex,follow" /> I am assuming that follow means to use the page when calculating PageRank, etc, and the noindex tells Google to not display the page in the search results. This page, profile.php, is used to take the $_GET['id'] and find the corresponding registered user. It displays a bit of information about that user, but is in no way relevant enough to warrant a display in the search results, so that is why I am trying to stop Google from indexing it. This is not the only page Google is indexing that I would like removed. I also have a WordPress blog, and there are many category pages, tag pages, and archive pages that I would like removed, and am doing the same procedures to attempt to remove them. Can someone explain how to get pages removed from Google's search results, and possibly some criteria that should help determine what types of pages that I don't want indexed. In terms of my WordPress blog, the only pages that I truly want indexed are my articles. Everything else I have tried to block, with little luck from Google. Can someone also explain why it's bad to have pages indexed that don't provide any new or relevant content, such as pages for WordPress tags or categories, which are clearly never going to receive traffic from Google. Thanks!
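
    As a side check, the Disallow rule itself can be tested the way a generic crawler would read it; a small Python sketch using the standard library (the profile URL is taken from the question, the second URL and the user-agent string are just examples, and this parser may not handle Google's wildcard extensions the same way, so Webmaster Tools remains the authority):

        from urllib.robotparser import RobotFileParser

        rp = RobotFileParser()
        rp.set_url("http://www.joemajewski.com/robots.txt")
        rp.read()  # fetch and parse the live robots.txt

        for url in (
            "http://www.joemajewski.com/wow/profile.php?id=3",
            "http://www.joemajewski.com/index.php",
        ):
            verdict = "blocked" if not rp.can_fetch("Googlebot", url) else "crawlable"
            print(url, "->", verdict)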

    Read the article

  • How to create NTFS partition in Linux to install Windows 7 from USB?

    - by Michal Stefanow
    I messed up my computer and need help. Generally: install Windows 7 from USB. Problem: "setup was unable to create a new system partition". When the first attempt to install Windows 7 failed, I tried a Linux live USB, installed the distro to the HDD, and erased all the existing partitions. Current state (fdisk -l) [writing from another computer, so no copy and paste]:

        /dev/sda1  305GB  Linux
        /dev/sda2  7GB    Extended
        /dev/sda5  7GB    Linux swap / Solaris

    To create a new NTFS partition: fdisk /dev/sda, n (for new), p (for primary), 3 (for partition number) - "No free sectors available". The whole HDD was formatted a couple of minutes before, so there is a lot of free space, but how do I resize a partition? I cannot find an option for resizing in man fdisk. Some people say I should use gparted, but my distro does not contain this package. And my distro doesn't support my wireless drivers, so I have serious problems downloading stuff. I also tried using cfdisk, but any command results in: "cfdisk bad primary partition 1 partition ends in the final partial cylinder". I also tried removing partition 1 and then creating a new one (so there is no "no free sectors"). I'm receiving a warning: "Re-reading the partition table failed with error 16: Device or resource busy. The kernel still uses the old table. The new table will be used at the next reboot." After restarting: "grub rescue, no known filesystem". That may indicate that some changes have been made, BUT when running the Windows 7 installer there is another error: "Windows cannot be installed to Disk 0 Partition 1". More detailed: "Windows cannot be installed to this hard disk space. Windows must be installed to a partition formatted as NTFS." So I format the drive using the Windows 7 installer, BUT this time yet another error: "Setup was unable to create a new system partition or locate an existing system partition. See the setup log files for more information". Apparently I cannot access the logs (how?) and I am back to the drawing board with my live USB (this time showing the partition as HPFS/NTFS). Any suggestions on how to install Windows 7? Should I reinstall Linux to the HDD, erase the existing partitions once again, and use parted rather than gparted (parted is included in the distro)? Or maybe I should create another bootable USB such as Parted Magic to painlessly create partitions? I just want to install Windows 7 from USB; my laptop is semi-operational and I am ready to receive some help regarding fdisk and creating NTFS partitions. UPDATE: I did as suggested (removed all the partitions) and tried to install into unallocated space. Tried to create a new partition and format it. Same error: "setup was unable to create a new system partition". I've come to the conclusion it may have something to do with the TrueCrypt I recently installed. Right now I'm trying to fix the MBR (as I have no way to create a rescue disc without an optical drive).

    Read the article

  • Delphi Unit local variables - how to make each instance unique?

    - by Justin
    Ok, this, I'm sure, is something simple that is easy to do. The problem: I've inherited scary spaghetti code and am slowly trying to better it when new features need adding - generally when a refactor makes adding the new feature neater. I've got a bunch of code I'm packing into a single unit which, in different places in the application, controls the same physical thing in the outside world. The control appears in several places in the application and operates slightly differently in each instance. What I've done is to create a unit with all of the features I need which I can simply drop, as a frame, into each form that requires it. Each form then uses the unit's interface methods to customise the behaviour for each instance. The problem within the problem: in the unit in question (the frame) I have a variable declared in the IMPLEMENTATION section - local to the unit. I also have a procedure, declared in the TYPE section, which takes an argument and assigns that argument to the local variable in question - each form passes a unique variable to each instance of the frame/unit. What I want is for each instance of the frame to keep its own version of that variable, different from the others, and use that to define how it operates. What seems to be happening, however, is that all instances are using the same value, even if I explicitly pass each instance a different variable. i.e.:

        Unit FlexibleUnit;

        interface

        uses
          //the uses stuff

        type
          TFlexibleUnit=class(TFrame)
            //declarations including
            procedure makeThisInstanceX(passMeTheVar:integer);
          private
            //
          public
            //
          end;

        implementation

        uses
          //the uses

        var
          myLocalVar;

        procedure makeThisInstanceX(passMeTheVar:integer);
        begin
          myLocalVar:=passMeTheVar;
        end;

        //other procedures using myLocalVar
        //etc to the end;

    Now somewhere in another Form I've dropped this Frame onto the Design pane, sometimes two of these frames on one Form, and have it declared in the proper places, etc. Each is unique in that:

        ThisFlexibleUnit : TFlexibleUnit;
        ThatFlexibleUnit : TFlexibleUnit;

    and when I do a:

        ThisFlexibleUnit.makeThisInstanceX(var1); //want to behave in way "var1"
        ThatFlexibleUnit.makeThisInstanceX(var2); //want to behave in way "var2"

    it seems that they both share the same variable "myLocalVar". Am I doing this wrong, in principle? If this is the correct method then it's a matter of debugging what I have (which is too huge to post), but if this is not correct in principle then is there a way to do what I am suggesting? Thanks in advance, Stack Overflow - you guys (and gals!) are legendary.

    Read the article

  • Partition/install issues

    - by jalal ahmad
    I am new to Ubuntu and tried to install 10.1 as dual boot option from a USB. At first I encountered the error when in partition dialogue of installation process that cannot find root directory. I did a search on Ubuntu forums and did this as in one of the posts. Make sure that the partition file system you wish to install Linux, Ubuntu or Backtrack on it is ext4, ext3 or ext2, and not FAT32 or NTFS. Then mount / on it: During the installation process press "change" on the partition you wish to use Make sure "do not use this partition" scroll is not chosen, scroll to ext4, ext3 or ext2 On the "mount" field write / Click ok, then next a message will appear saying something like "swap area was not defined, do you wish to continue or choose a swap area?", click "ok" and continue or click "go back" and choose another partition and click change, on the file system scroll choose "swap" and click "ok" and next All good but when I rebooted I could not find Windows vista as in dual boot option. Plus I could not see wireless networks and in the process of trying to find out what went wrong the soft switch somehow turned off and as I cannot boot in Windows I have no idea what to do. Again searching internet I found a post which said the dual boot problem can be overcome by installing gparted but when I tried I got the message Reading package lists... Done Building dependency tree Reading state information.. Done E: Couldn't find package gparted I thought I am going to copy my stuff from my hard disk and try to install Windows but I found out that I have two partitions which are different from what I had before installing Ubuntu. I now have filesystem partition1 119 GB ext4, swap partition 5 1.1 GB swap and extended partition 2 1.1 GB. And I cannot mount 119 GB where all my personal videos, photos are if still there. Now I cannot boot from Windows even. Need help on what to do? Best case scenario would be to be able to copy my stuff before I mess up the system further. Else a dual boot system and if not then how do I install vista again. I have Windows CD. Cheers guys and thanks in advance.

    Read the article

  • Fix single entry from mbr

    - by Sander
    I use EasyBCD to manage my tripleboot of (1) Windows Server 2008 R2, (2) Windows 7 Professional and (3) Ubuntu Linux. While trying to change the order of my boot menu I ended up losing the Windows Server entry. Luckily I had a boot menu backup (.bcd file) that allowed me to restore my boot menu using EasyBCD. However, when I now select the Windows Server option in my boot menu the Windows Server Recovery Environment starts up. So I have to select language/keyboard layout/etc. and then I have 3 options as shown in the image below. . My goal is to fix the one corrupted Windows Server entry from my boot menu without messing up or losing the two other ones. I'm guessing the Recovery Console (Command Prompt) is the next step and that I will be needing bootrec.exe. But when consulting this page: Use the Bootrec.exe tool in the Windows Recovery Environment to troubleshoot and repair startup issues in Windows (about half way down there's a link that shows the bootrec.exe options) I'm getting uncertain. The page lists 4 options for bootrec.exe : /FixMbr /FixBoot /ScanOs /RebuildBcd What option do I need to fix just the server entry of my boot menu? Thanks in advance, Sander P.S. All three OS's are on the same physical disk (3 different partitions). Disk layout: System reserved (primary partition, 100 MB) Windows 7 (primary parition, 150 GB) Windows Server 2008 (primary partition, 150 GB) Extended partition (linux partitions (/,/swap,/home), 150GB + data partition, 150 GB) P.P.S. This is what my boot menu looks like using EasyBCD (Detailed/Debug mode) on my Windows 7 installation. Windows Boot Manager -------------------- identifier {9dea862c-5cdd-4e70-acc1-f32b344d4795} device partition=\Device\HarddiskVolume1 description Windows Boot Manager locale en-US inherit {7ea2e1ac-2e61-4728-aaa3-896d9d0a9f0e} default {93f90e43-cae8-11df-b05a-c9177e705936} resumeobject {93f90e3e-cae8-11df-b05a-c9177e705936} displayorder {93f90e43-cae8-11df-b05a-c9177e705936} {93f90e3f-cae8-11df-b05a-c9177e705936} {93f90e46-cae8-11df-b05a-c9177e705936} toolsdisplayorder {b2721d73-1db4-4c62-bf78-c548a880142d} timeout 10 displaybootmenu Yes Windows Boot Loader ------------------- identifier {93f90e43-cae8-11df-b05a-c9177e705936} device partition=\Device\HarddiskVolume3 path \Windows\system32\winload.exe description Windows Server 2008 R2 - Standard locale en-US inherit {6efb52bf-1766-41db-a6b3-0ee5eff72bd7} recoverysequence {93f90e44-cae8-11df-b05a-c9177e705936} recoveryenabled Yes osdevice partition=\Device\HarddiskVolume3 systemroot \Windows resumeobject {93f90e42-cae8-11df-b05a-c9177e705936} nx OptOut Windows Boot Loader ------------------- identifier {93f90e3f-cae8-11df-b05a-c9177e705936} device partition=C: path \Windows\system32\winload.exe description Windows 7 - Professional locale nl-NL inherit {6efb52bf-1766-41db-a6b3-0ee5eff72bd7} recoverysequence {93f90e40-cae8-11df-b05a-c9177e705936} recoveryenabled Yes osdevice partition=C: systemroot \Windows resumeobject {93f90e3e-cae8-11df-b05a-c9177e705936} nx OptIn Real-mode Boot Sector --------------------- identifier {93f90e46-cae8-11df-b05a-c9177e705936} device partition=C: path \NST\AutoNeoGrub0.mbr description Ubuntu 10.04 - Lucid Lynx

    Read the article

  • multithreading with database

    - by Darsin
    I am looking for a strategy to utilize multithreading (probably asynchronous delegates) to do a synchronous operation. I am new to multithreading, so I will outline my scenario first. This synchronous operation is currently done for one set of data (a portfolio) based on the parameters provided. The (pseudo-code) implementation is given below:

        public DataSet DoTests(int fundId, DateTime portfolioDate)
        {
            // Get test results for the portfolio
            // Call the database adapter method, which in turn is a stored procedure,
            // which in turn runs a series of "rule" stored procs and fills a local temp table and returns it back.
            DataSet resultsDataSet = GetTestResults(fundId, portfolioDate);
            try
            {
                // Do some local processing on the results
                DoSomeProcessing(resultsDataSet);

                // Save the results in Test, TestResults and TestAllocations tables in a transaction.
                // Sets a global transaction which is provided to all the adapter methods called below.
                // It is defined in the Base class.
                StartTransaction("TestTransaction");

                // Save Test and get a testId
                int testId = UpdateTest(resultsDataSet);   // Adapter method, uses the same transaction

                // Update testId in the other tables in the dataset
                UpdateTestId(resultsDataSet, testId);

                // Update TestResults
                UpdateTestResults(resultsDataSet);         // Adapter method, uses the same transaction

                // Update TestAllocations
                UpdateTestAllocations(resultsDataSet);     // Adapter method, uses the same transaction

                // It is defined in the base class.
                CommitTransaction("TestTransaction");
            }
            catch
            {
                RollbackTransaction("TestTransaction");
            }
            return resultsDataSet;
        }

    Now the requirement is to do it for multiple sets of data. One way would be to call the above DoTests() method in a loop and get the data, but I would prefer doing it in parallel. There are certain catches: the StartTransaction() method creates a connection (and transaction) every time it is called, and all the underlying database tables and procedures are the same for each call of DoTests() (obviously). Thus my questions are: Will using multithreading improve performance at all? What are the chances of deadlock, especially when new TestIds are being created and the Tests, TestResults and TestAllocations are being saved? How can these deadlocks be handled? Is there any other, more efficient way of doing the above operation apart from looping over the DoTests() method repeatedly?
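
    For what it's worth, the general pattern of running each portfolio on its own worker with its own connection and transaction can be sketched outside C#; a rough Python illustration of that shape only (the names and the sqlite database are made up - the real code would stay in C#/ADO.NET):

        from concurrent.futures import ThreadPoolExecutor
        import sqlite3

        def do_tests(fund_id, portfolio_date):
            # Each worker opens its own connection, so transactions never share state.
            conn = sqlite3.connect("portfolio.db")   # hypothetical database
            try:
                with conn:  # one transaction per portfolio, committed or rolled back as a unit
                    rows = conn.execute(
                        "SELECT * FROM test_results WHERE fund_id = ? AND run_date = ?",
                        (fund_id, portfolio_date),
                    ).fetchall()
                    # ... local processing and the Test/TestResults/TestAllocations saves go here ...
                return rows
            finally:
                conn.close()

        portfolios = [(1, "2010-05-01"), (2, "2010-05-01"), (3, "2010-05-01")]
        with ThreadPoolExecutor(max_workers=4) as pool:
            results = list(pool.map(lambda p: do_tests(*p), portfolios))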

    Read the article

  • Loading the last related record instantly for multiple parent records using Entity framework

    - by Guillaume Schuermans
    Does anyone know a good approach using Entity Framework for the problem described below? I am trying for our next release to come up with a performant way to show the placed orders for the logged on customer. Of course paging is always a good technique to use when a lot of data is available I would like to see an answer without any paging techniques. Here's the story: a customer places an order which gets an orderstatus = PENDING. Depending on some strategy we move that order up the chain in order to get it APPROVED. Every change of status is logged so we can see a trace for statusses and maybe even an extra line of comment per status which can provide some extra valuable information to whoever sees this order in an interface. So an Order is linked to a Customer. One order can have multiple orderstatusses stored in OrderStatusHistory. In my testscenario I am using a customer which has 100+ Orders each with about 5 records in the OrderStatusHistory-table. I would for now like to see all orders in one page not using paging where for each Order I show the last relevant Status and the extra comment (if there is any for this last status; both fields coming from OrderStatusHistory; the record with the highest Id for the given OrderId). There are multiple scenarios I have tried, but I would like to see any potential other solutions or comments on the things I have already tried. Trying to do Include() when getting Orders but this still results in multiple queries launched on the database. Each order triggers an extra query to the database to get all orderstatusses in the history table. So all statusses are queried here instead of just returning the last relevant one, plus 100 extra queries are launched for 100 orders. You can imagine the problem when there are 100000+ orders in the database. Having 2 computed columns on the database: LastStatus, LastStatusInformation and a regular Linq-Query which gets those columns which are available through the Entity-model. The problem with this approach is the fact that those computed columns are determined using a scalar function which can not be changed without removing the formula from the computed column, etc... In the end I am very familiar with SQL and Stored procedures, but since the rest of the data-layer uses Entity Framework I would like to stick to it as long as possible, even though I have my doubts about performance. Using the SQL approach I would write something like this: WITH cte (RN, OrderId, [Status], Information) AS ( SELECT ROW_NUMBER() OVER (PARTITION BY OrderId ORDER BY Id DESC), OrderId, [Status], Information FROM OrderStatus ) SELECT o.Id, cte.[Status], cte.Information AS StatusInformation, o.* FROM [Order] o INNER JOIN cte ON o.Id = cte.OrderId AND cte.RN = 1 WHERE CustomerId = @CustomerId ORDER BY 1 DESC; which returns all orders for the customer with the statusinformation provided by the Common Table Expression. Does anyone know a good approach using Entity Framework?

    Read the article

  • BizTalk - generating schema from Oracle stored proc with table variable argument

    - by Ron Savage
    I'm trying to set up a simple example project in BizTalk that gets changes made to a table in a SQL Server db and updates a copy of that table in an Oracle db. On the SQL Server side, I have a stored proc named GetItemChanges() that returns a variable number of records. On the Oracle side, I have a stored proc named Update_Item_Region_Table() designed to take a table of records as a parameter so that it can process all the records returned from GetItemChanges() in one call. It is defined like this: create or replace type itemrec is OBJECT ( UPC VARCHAR2(15), REGION VARCHAR2(5), LONG_DESCRIPTION VARCHAR2(50), POS_DESCRIPTION VARCHAR2(30), POS_DEPT VARCHAR2(5), ITEM_SIZE VARCHAR2(10), ITEM_UOM VARCHAR2(5), BRAND VARCHAR2(10), ITEM_STATUS VARCHAR2(5), TIME_STAMP VARCHAR2(20), COSTEDBYWEIGHT INTEGER ); create or replace type tbl_of_rec is table of itemrec; create or replace PROCEDURE Update_Item_Region_table ( Item_Data tbl_of_rec ) IS errcode integer; errmsg varchar2(4000); BEGIN for recIndex in 1 .. Item_Data.COUNT loop update FL_ITEM_REGION_TEST set Region = Item_Data(recIndex).Region, Long_description = Item_Data(recIndex).Long_description, Pos_Description = Item_Data(recIndex).Pos_description, Pos_Dept = Item_Data(recIndex).Pos_dept, Item_Size = Item_Data(recIndex).Item_Size, Item_Uom = Item_Data(recIndex).Item_Uom, Brand = Item_Data(recIndex).Brand, Item_Status = Item_Data(recIndex).Item_Status, Timestamp = to_date(Item_Data(recIndex).Time_stamp, 'yyyy-mm-dd HH24:mi:ss'), CostedByWeight = Item_Data(recIndex).CostedByWeight where UPC = Item_Data(recIndex).UPC; log_message(Item_Data(recIndex).Region, '', 'Updated item ' || Item_Data(recIndex).UPC || '.'); end loop; EXCEPTION WHEN OTHERS THEN errcode := SQLCODE(); errmsg := SQLERRM(); log_message('CE', '', 'Error in Update_Item_Region_table(): Code [' || errcode || '], Msg [' || errmsg || '] ...'); END; In my BizTalk project I generate the schemas and binding information for both stored procedures. For the Oracle procedure, I specified a path for the GeneratedUserTypesAssemblyFilePath parameter to generate a DLL to contain the definition of the data types. In the Send Port on the server, I put the path to that Types DLL in the UserAssembliesLoadPath parameter. I created a map to translate the GetItemChanges() schema to the Update_Item_Region_Table() schema. When I run it the data is extracted and transformed fine but causes an exception trying to pass the data to the Oracle proc: *The adapter failed to transmit message going to send port "WcfSendPort_OracleDBBinding_HOST_DATA_Procedure_Custom" with URL "oracledb://dvotst/". It will be retransmitted after the retry interval specified for this Send Port. Details:"System.InvalidOperationException: Custom type mapping for 'HOST_DATA.TBL_OF_REC' is not specified or is invalid.* So it is apparently not getting the information about the custom data type TBL_OF_REC into the Types DLL. Any tips on how to make this work?

    Read the article

  • Wifi randomly drops on Windows 8 laptop

    - by JosiahS
    First of all, I did a lot of research on this problem, and I wasn't able to come to any helpful conclusion. I've finally decided that I need advice from those who might know where to look. So don't let me down. :P I used to have an older Windows 7 laptop, which worked great for basic office and web browsing. However, I wanted something that would play actual modern games. So I recently bought a Sager NP8235 with the Intel Wireless-AC 7260 wifi card, and installed Windows 8 Pro on it. And ever since, I've been having problems with the wifi. Generally, what happens is if I leave the laptop on but inactive for an extended amount of time (I've estimated it around an hour to two), the wifi will start dropping randomly. If I happened to have a download going at the time, it usually causes the download to fail. Or, if I put the laptop to sleep overnight, the next morning I usually have to restart the computer because the wifi device apparently stops working (it literally won't turn on). Also, and most frustrating, whenever I'm on a video chat (like Skype), after about ten minutes, the connection will start lagging like crazy, until it forces Skype to end the call. After that, I usually have to disable and reenable the wifi to get it working again. I know it isn't our internet, because all the other computers in our house (~8) don't have any issues. Even the old Windows 7 laptop (connected also over wifi) works just fine, scoring the normal ~3Mbps average on speedtest.net (yes, I know our internet is slow, we live out in the country). Additionally, when I connect the Sager directly to the router via ethernet, the internet instantly starts working just great. Like I said, I've done a lot of googling to figure out what's going on, and I haven't been able to find anything that worked for me. Is it Windows 8 conflicting with the Wifi drivers? As of this writing, I have the Intel drivers v16.1.5.2 installed (without the extra Intel software). Or is it our router? It's a TP-Link TL-WR841ND, set to the default settings. The Sager is currently being assigned to a static IP, if that makes any difference. And yet, the old windows 7 laptop has a much more stable connection than the Sager. Anyone have any ideas? At this point, I'd appreciate even knowing what the problem is.

    Read the article

  • Database schema for simple stats project

    - by Bubnoff
    Backdrop: I have a file hierarchy of cvs files for multiple locations named by dates they cover ...by month specifically. Each cvs file in the folder is named after the location. eg', folder name: 2010-feb contains: location1.csv location2.csv Each CSV file holds records like this: 2010-06-28, 20:30:00 , 0 2010-06-29, 08:30:00 , 0 2010-06-29, 09:30:00 , 0 2010-06-29, 10:30:00 , 0 2010-06-29, 11:30:00 , 0 meaning of record columns ( column names ): Date, time, # of sessions I have a perl script that pulls the data from this mess and originally I was going to store it as json files, but am thinking a database might be more appropriate long term ...comparing year to year trends ...fun stuff like that. Pt 2 - My question/problem: So I now have a REST service that coughs up json with a test database. My question is [ I suck at db design ], how best to design a database backend for this? I am thinking the following tables would suffice and keep it simple: Location: (PK)location_code, name session: (PK)id, (FK)location_code, month, hour, num_sessions I need to be able to average sessions (plus min and max) for each hour across days of week in addition to days of week in a given month or months. I've been using perl hashes to do this and am trying to decide how best to implement this with a database. Do you think stored procedures should be used? As to the database, depending on info gathered here, it will be postgresql or sqlite. If there is no compelling reason for postgresql I'll stick with sqlite. How and where should I compare the data to hours of operation. I am storing the hours of operation in a yaml file. I currently 'match' the hour in the data to a hash from the yaml to do this. Would a database open simpler methods? I am thinking I would do this comparison as I do now then insert the data. Can be recalled with: SELECT hour, num_sessions FROM session WHERE location_code=LOC1 Since only hours of operation are present, I do not need to worry about it. Should I calculate all results as I do now then store as a stats table for different 'reports'? This, rather than processing on demand? How would this look? Anyway ...I ramble. Thanks for reading! Bubnoff

    Read the article

  • What common routines do you put in your Program.cs for C#

    - by Rick
    I'm interested in any common routine/procedures/methods that you might use in you Program.cs when creating a .NET project. For instance I commonly use the following code in my desktop applications to allow easy upgrades, single instance execution and friendly and simple reporting of uncaught system application errors. using System; using System.Diagnostics; using System.Threading; using System.Windows.Forms; namespace NameoftheAssembly { internal static class Program { /// <summary> /// The main entry point for the application. Modified to check for another running instance on the same computer and to catch and report any errors not explicitly checked for. /// </summary> [STAThread] private static void Main() { //for upgrading and installing newer versions string[] arguments = Environment.GetCommandLineArgs(); if (arguments.GetUpperBound(0) > 0) { foreach (string argument in arguments) { if (argument.Split('=')[0].ToLower().Equals("/u")) { string guid = argument.Split('=')[1]; string path = Environment.GetFolderPath(Environment.SpecialFolder.System); var si = new ProcessStartInfo(path + "\\msiexec.exe", "/x" + guid); Process.Start(si); Application.Exit(); } } //end of upgrade } else { bool onlyInstance = false; var mutex = new Mutex(true, Application.ProductName, out onlyInstance); if (!onlyInstance) { MessageBox.Show("Another copy of this running"); return; } AppDomain.CurrentDomain.UnhandledException += CurrentDomain_UnhandledException; Application.ThreadException += ApplicationThreadException; Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Application.Run(new Form1()); } } private static void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e) { try { var ex = (Exception) e.ExceptionObject; MessageBox.Show("Whoops! Please contact the developers with the following" + " information:\n\n" + ex.Message + ex.StackTrace, " Fatal Error", MessageBoxButtons.OK, MessageBoxIcon.Stop); } catch (Exception) { //do nothing - Another Exception! Wow not a good thing. } finally { Application.Exit(); } } public static void ApplicationThreadException(object sender, ThreadExceptionEventArgs e) { try { MessageBox.Show("Whoops! Please contact the developers with the following" + " information:\n\n" + e.Exception.Message + e.Exception.StackTrace, " Error", MessageBoxButtons.OK, MessageBoxIcon.Stop); } catch (Exception) { //do nothing - Another Exception! Wow not a good thing. } } } } I find these routines to be very helpful. What methods have you found helpful in Program.cs?

    Read the article

  • What would cause my SendMail server not to acknowledge receiving a TCP Sequence?

    - by Mike B
    My TCP/IP Stack knowledge is a little rusty so please bear with me.... I have a CentOS 5.7 server with SendMail and am having seeing intermittent timeout issues sending email (particularly larger email) to other remote domains. It doesn't happen with all attachments or recipient domains. Just some. After some extended troubleshooting, I think I've narrowed it down to TCP Sequences not being acknowledged. Here's a breakdown of the TCP session from a packet capture I collected directly on my MTA (fooMTA): Packet 1 - 11: Standard TCP handshake followed by initial SMTP conversation. No errors. Packet #12 Recipient MTA: TCP sequence 231. Ack 91. Packet #13 FooMTA: TCP sequence 91. Ack 305. Packet #14 FooMTA: TCP sequence 1115. Ack 305. Packet #15 Recipient MTA: TCP sequence 305. Ack 2495. Packet #16 FooMTA: TCP sequence 2495. Ack 305. Packet #17 FooMTA: TCP sequence 5255. Ack 305. Packet #18: Recipient MTA: TCP sequence 305. Ack 5255. Packet #19: FooMTA: TCP sequence 6635. Ack 305. Packet #20: FooMTA: TCP sequence 8015. Ack 305. Packet #21: Recipient MTA: TCP Sequence 305. Ack 8015. Packet #22: FooMTA: TCP Sequence 10775. Ack 305. Packet #23: FooMTA: TCP Sequence 13535. Ack 305. Packet #24: Recipient MTA: TCP sequence 305. Ack 10775 Packet #25: FooMTA: TCP Sequence 14915. Ack 305 It keeps going like this with my server still thinking it hasn’t received sequence 305… in response the remote side eventually retransmits its prior data thinking that it never arrived. Eventually the gap gets so large that no new data is sent and the remote MTA keeps retransmitting old stuff. This contributes to an exponential backoff and eventually the remote side gives up. What’s strange to me is that I see the “missing” TCP sequence (305 in this case) arriving back to my server (via a packet capture collected directly from fooMTA) So I don’t get why my server keeps asking for it. Could this be firewall related? What would be the next step in troubleshooting?

    Read the article

  • What can cause System.Move to occasionally give wrong results?

    - by Fredrik Loftheim
    The last few days we have had some strange problems with our database components developed by a third party. There has been no changes to these components for months. The code that HAS changed the last few days is our own code and we have also updated our gui-components developed by another third party. After debugging I have found that a call to System.Move in one of the database component procedures occationaly gives wrong results! Please take a look at the code below from the database components and read my comments. How can this inconsistent behaviour happen? Can anyone give me an idea of how to procede to find the cause of this inconsistent behaviour? NB! I dont think there is anything wrong with THIS code, it is only shown to explain the problem "symptoms". My guess is that there is some sort of memory corruption or something, caused by our code or the updated gui-component-code. Edit: Take a look at the blogpost linked below. It seems that it could be related to my problem. At least as I read it it confirms that System.Move can give wrong results: http://blog.excastle.com/2007/08/28/delphi-bug-of-the-day-fpu-stack-leak/ Procedure InternalDescribe; var cbufl: sb4; //sb4=LongInt cbuf: array[0..30] of char; cbufp: PChar; //.... begin //..Some code repeat //...Some code to initialize cbufp and cbufl //On the 15. iteration the values immediately Before Move are always these: //cbufp = 'STDPRODUCTSTOREDELEMENTSCOUNT' //cbuf = ('S', 'T', 'A', 'T', 'U', 'S', #0, 'E', 'V', 'A', 'R', 'R', 'E', 'C', 'I', 'D', #0, 'D', 'U', 'C', 'T', 'I', 'D', #0, #0, #0, #0, #0, #0, #0, #0) //cbufl = 29 Move(cbufp^, cbuf, cbufl); //Values immediately After Move should then be: //cbuf = ('S', 'T', 'D', 'P', 'R', 'O', 'D', 'U', 'C', 'T', 'S', 'T', 'O', 'R', 'E', 'D', 'E', 'L', 'E', 'M', 'E', 'N', 'T', 'S', 'C', 'O', 'U', 'N', 'T', #0, #0) //But sometimes this Move results in this value( 1 in 5..15 times): //cbuf = ('S', 'T', 'D', 'P', 'R', 'O', 'D', 'U', 'C', 'T', 'S', 'T', 'O', 'R', 'E', 'D', #0, #0, #0, #0, #0, 'N', 'T', 'S', 'C', 'O', 'U', 'N', 'T', #0, #0) } until SomeCondition; //...Some more code end;

    Read the article

  • How should I ethically approach user password storage for later plaintext retrieval?

    - by Shane
    As I continue to build more and more websites and web applications I am often asked to store user's passwords in a way that they can be retrieved if/when the user has an issue (either to email a forgotten password link, walk them through over the phone, etc.) When I can I fight bitterly against this practice and I do a lot of ‘extra’ programming to make password resets and administrative assistance possible without storing their actual password. When I can’t fight it (or can’t win) then I always encode the password in some way so that it at least isn’t stored as plaintext in the database—though I am aware that if my DB gets hacked that it won’t take much for the culprit to crack the passwords as well—so that makes me uncomfortable. In a perfect world folks would update passwords frequently and not duplicate them across many different sites—unfortunately I know MANY people that have the same work/home/email/bank password, and have even freely given it to me when they need assistance. I don’t want to be the one responsible for their financial demise if my DB security procedures fail for some reason. Morally and ethically I feel responsible for protecting what can be, for some users, their livelihood even if they are treating it with much less respect. I am certain that there are many avenues to approach and arguments to be made for salting hashes and different encoding options, but is there a single ‘best practice’ when you have to store them? In almost all cases I am using PHP and MySQL if that makes any difference in the way I should handle the specifics. Additional Information for Bounty I want to clarify that I know this is not something you want to have to do and that in most cases refusal to do so is best. I am, however, not looking for a lecture on the merits of taking this approach I am looking for the best steps to take if you do take this approach. In a note below I made the point that websites geared largely toward the elderly, mentally challenged, or very young can become confusing for people when they are asked to perform a secure password recovery routine. Though we may find it simple and mundane in those cases some users need the extra assistance of either having a service tech help them into the system or having it emailed/displayed directly to them. In such systems the attrition rate from these demographics could hobble the application if users were not given this level of access assistance, so please answer with such a setup in mind. Thanks to Everyone This has been a fun questions with lots of debate and I have enjoyed it. In the end I selected an answer that both retains password security (I will not have to keep plain text or recoverable passwords), but also makes it possible for the user base I specified to log into a system without the major drawbacks I have found from normal password recovery. As always there were about 5 answers that I would like to have marked correct for different reasons, but I had to choose the best one--all the rest got a +1. Thanks everyone!
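
    For the common case where a support tech just needs to get someone back in, one pattern is a salted, slow hash for verification plus a short-lived, single-use reset token that can be emailed or read out over the phone, so nobody ever sees a stored password. A minimal Python sketch of that idea (standard library only; the 16-byte salt, iteration count and token length are illustrative choices, and the same shape translates to PHP):

        import hashlib
        import os
        import secrets

        def hash_password(password, salt=None, iterations=200000):
            # Store (salt, iterations, digest) instead of the password itself.
            salt = salt or os.urandom(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
            return salt, iterations, digest

        def verify_password(password, salt, iterations, stored_digest):
            candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
            return secrets.compare_digest(candidate, stored_digest)

        def make_reset_token():
            # Single-use, expiring token handed to the user instead of a password.
            return secrets.token_urlsafe(24)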

    Read the article

  • USB hard drive not recognized

    - by user318772
    Until recently I was using this portable USB hard drive with my Windows 7 laptop and my Ubuntu laptop. Suddenly neither laptop recognizes it. This is the output I get from lsusb:
    Bus 001 Device 004: ID 1058:1010 Western Digital Technologies, Inc. Elements External HDD
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 003: ID 0b97:7762 O2 Micro, Inc. Oz776 SmartCard Reader
    Bus 003 Device 002: ID 0b97:7761 O2 Micro, Inc. Oz776 1.1 Hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 002 Device 002: ID 413c:a005 Dell Computer Corp. Internal 2.0 Hub
    Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    fdisk doesn't show the external hard drive:
    Disk /dev/sda: 80.0 GB, 80026361856 bytes
    255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0004a743
    Device Boot Start End Blocks Id System
    /dev/sda1 * 2048 152111103 76054528 83 Linux
    /dev/sda2 152113150 156301311 2094081 5 Extended
    /dev/sda5 152113152 156301311 2094080 82 Linux swap / Solaris
    When I run testdisk:
    TestDisk 6.14, Data Recovery Utility, July 2013
    Christophe GRENIER <[email protected]>
    http://www.cgsecurity.org
    TestDisk is free software, and comes with ABSOLUTELY NO WARRANTY.
    Select a media (use Arrow keys, then press Enter):
    >Disk /dev/sda - 80 GB / 74 GiB - ST980825AS
     Disk /dev/sdb - 2199 GB / 2048 GiB
    and choose Intel -> Analyse, I get a partition error:
    Disk /dev/sdb - 2199 GB / 2048 GiB - CHS 2097152 64 32
    Current partition structure:
    Partition Start End Size in sectors
    Partition: Read error
    Here is the output of dmesg:
    [11948.549171] Add. Sense: Invalid command operation code
    [11948.549177] sd 2:0:0:0: [sdb] CDB:
    [11948.549181] Read(16): 88 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00
    [11948.550489] sd 2:0:0:0: [sdb] Invalid command failure
    [11948.550495] sd 2:0:0:0: [sdb]
    [11948.550499] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    [11948.550505] sd 2:0:0:0: [sdb]
    [11948.550508] Sense Key : Illegal Request [current]
    [11948.550514] Info fld=0x0
    [11948.550519] sd 2:0:0:0: [sdb]
    [11948.550525] Add. Sense: Invalid command operation code
    [11948.550531] sd 2:0:0:0: [sdb] CDB:
    [11948.550534] Read(16): 88 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00
    [11948.551870] sd 2:0:0:0: [sdb] Invalid command failure
    [11948.551876] sd 2:0:0:0: [sdb]
    [11948.551880] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    [11948.551885] sd 2:0:0:0: [sdb]
    [11948.551888] Sense Key : Illegal Request [current]
    [11948.551895] Info fld=0x0
    [11948.551900] sd 2:0:0:0: [sdb]
    [11948.551905] Add. Sense: Invalid command operation code
    [11948.551911] sd 2:0:0:0: [sdb] CDB:
    [11948.551914] Read(16): 88 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00
    If possible I want to retrieve at least some data from this hard drive. If that's not possible I would like to format it and use it. Any help will be greatly appreciated. Thanks
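    Before reformatting, it may be worth probing whether any part of the disk is still readable at the raw-device level; if it is, image it first (ddrescue is the usual tool) and recover files from the image rather than from the drive itself. A rough sketch of such a probe, assuming the drive still appears as /dev/sdb as in the dmesg output above (run as root; the offsets are arbitrary sample points):

    import os

    def probe(device: str, offsets=(0, 512 * 2048, 512 * 4096)):
        # Attempt a few raw 4 KiB reads at different offsets. Any read that
        # succeeds means some data is still reachable and worth imaging
        # before the drive is formatted or written to.
        fd = os.open(device, os.O_RDONLY)
        try:
            for off in offsets:
                try:
                    data = os.pread(fd, 4096, off)
                    print(f"offset {off}: read {len(data)} bytes")
                except OSError as exc:
                    print(f"offset {off}: {exc}")
        finally:
            os.close(fd)

    probe("/dev/sdb")

    If even offset 0 fails with the same 'Invalid command' errors seen in dmesg, the USB bridge or drive firmware is likely at fault, and software-only recovery through this enclosure may not get far.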

    Read the article

  • Database Schema Versioning Strategies

    - by Jack Ryan
    I work on a project that uses a reasonably large database, the live version weighing in at somewhere around 60-80 GB. The live database is the only real definitive source of our schema, and because of its size, duplicating it is too slow to be done often. This means we have ended up developing our database schema in a pretty ad hoc way, using SQL Compare to migrate changes from dev DBs to the live system, and only wiping our dev DBs every month or two. I am hoping to get some pointers on how to improve our database development workflow so that we have a little more control. Some things to think about:
    - Currently nobody is really in charge of the database schema; all developers can change it if they need to, though generally these decisions are talked about before they are done.
    - There are stored procedures, functions, and views in the database. These should probably be dumped to files so they can be reloaded on every build.
    - Schema changes should probably be checked in as scripts. We have started to do this recently. However, all our scripts must then be numbered (because there may be dependencies between them) and must be re-runnable (because our build script currently runs them all in order). This makes them hard to read, because they are full of conditionals that check whether tables or columns already exist, and that is a step often forgotten by developers.
    - Getting a new database should be quick and easy. This is currently a big problem; it takes several hours to get a copy of last night's backup and restore it onto a dev machine.
    - Some mechanism needs to be in place to allow developers to update static data. We have tables that contain data that is never updated through the application, but does potentially need to be changed when we do a new release (often this data drives dropdowns).
    - The whole thing needs to be runnable as part of a build script.
    Are there any tools that can be used to help with this? Eventually I would like to be at a point where a new DB can be built from scratch without copying any data from the live system. I don't mind writing some scripts to glue all the steps together, but each part should be easily editable so that we continue to use it rather than make changes directly on DBs.
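    As a starting point, a minimal sketch of a migration runner is below: numbered scripts are applied in order exactly once, and the database itself records what has already run, so individual scripts no longer need 'if exists' guards to be re-runnable. Python and sqlite3 are used purely as a stand-in driver, and the file names are hypothetical; the same pattern carries over to the real server with the appropriate client library, and ready-made tools such as Flyway or Liquibase implement it as well.

    import os, sqlite3

    def migrate(conn, migrations_dir="migrations"):
        # The schema_version table records which numbered scripts have run,
        # so each script is applied exactly once and never needs to be
        # written defensively against re-execution.
        conn.execute("CREATE TABLE IF NOT EXISTS schema_version (id TEXT PRIMARY KEY)")
        applied = {row[0] for row in conn.execute("SELECT id FROM schema_version")}
        for name in sorted(os.listdir(migrations_dir)):   # 001_add_orders.sql, 002_..., ...
            if not name.endswith(".sql") or name in applied:
                continue
            with open(os.path.join(migrations_dir, name)) as f:
                conn.executescript(f.read())
            conn.execute("INSERT INTO schema_version (id) VALUES (?)", (name,))
            conn.commit()
            print("applied", name)

    migrate(sqlite3.connect("dev.db"))

    The same runner can also reload dumped stored procedures, functions, views, and static data files on every build, which keeps the whole database buildable from source without copying the live system.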

    Read the article

< Previous Page | 133 134 135 136 137 138 139 140 141 142 143 144  | Next Page >