Search Results

Search found 13808 results on 553 pages for 'remote storage'.


  • Convert ASP.NET membership system to secure password storage

    - by wrburgess
    I have a potential client that set up their website and membership system in ASP.NET 3.5. When their developer set up the system, it seems he turned off the security/hashing aspect of password storage, so everything is stored in the clear. Is there a way to re-enable secure password storage in ASP.NET membership without resetting all of the passwords in the database? The client is worried that they'll lose customers if everyone has to go through a massive password change. I've always installed with security on by default, so I don't know the effect of a switchover. Is there a way to convert the entire system to secure password storage without major effects on the users?
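
    One hedged approach, given that the passwords are currently recoverable in the clear: run a one-time migration that reads each user's existing password and pushes the same password back through a provider configured for hashed storage, so nobody is forced to pick a new one. The sketch below assumes two provider entries in web.config (the old one still set to passwordFormat="Clear" with retrieval enabled, the default one switched to "Hashed"), and it assumes ChangePassword re-encodes using the default provider's configured format; both points should be verified against a staging copy of the database before touching production.

        // One-time migration sketch, not production code.
        // Assumes: passwords are stored in the clear; "clearTextProvider" is a second
        // membership provider still configured with passwordFormat="Clear" and
        // enablePasswordRetrieval="true"; the default provider is now "Hashed".
        using System;
        using System.Web.Security;

        public static class PasswordMigration
        {
            public static void RehashAllUsers(MembershipProvider clearTextProvider)
            {
                foreach (MembershipUser user in Membership.GetAllUsers())
                {
                    // Read the existing clear-text password via the old provider.
                    string clearPassword = clearTextProvider.GetPassword(user.UserName, null);

                    // Store the same password back through the default (hashed) provider.
                    bool changed = user.ChangePassword(clearPassword, clearPassword);
                    if (!changed)
                    {
                        Console.WriteLine("Could not re-store password for {0}", user.UserName);
                    }
                }
            }
        }

    If testing shows that ChangePassword preserves each user's original storage format, the fallback is to recreate the users through the hashed provider, which is more invasive because role and profile links are keyed to the user record.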

    Read the article

  • Silverlight Isolated Storage and loading big files

    - by Thomas Joulin
    In a Windows Phone 7 application, I would like to query a big XML file (a list of cities) stored using Isolated Storage. If I do it that way, will the whole file (~5 MB) be loaded into memory? If so, what other solutions do I have? Edit: more details. I want to use AutoCompleteBox (http://www.jeff.wilcox.name/2008/10/introducing-autocompletebox/), but instead of using a web service (this is fixed data, no need to be online), I want to query a file/database/isolated storage... I have a fixed list of cities. I said in the comments it's 40k, but it finally seems closer to 1k rows.
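
    If the list really is closer to 1k rows, loading it once at startup and keeping it in a field is probably fine. For a larger file, a streaming read keeps memory flat, because only the reader's buffer is ever resident rather than the whole document. A rough sketch (the file name, the City element and its Name attribute are made-up placeholders for whatever the real XML looks like):

        using System.Collections.Generic;
        using System.IO;
        using System.IO.IsolatedStorage;
        using System.Xml;

        public static class CityLookup
        {
            // Streams the XML from isolated storage and yields only the matching names.
            public static IEnumerable<string> FindCities(string prefix)
            {
                using (var store = IsolatedStorageFile.GetUserStoreForApplication())
                using (var stream = store.OpenFile("cities.xml", FileMode.Open, FileAccess.Read))
                using (var reader = XmlReader.Create(stream))
                {
                    while (reader.ReadToFollowing("City"))
                    {
                        string name = reader.GetAttribute("Name");
                        if (name != null && name.StartsWith(prefix, System.StringComparison.OrdinalIgnoreCase))
                            yield return name;
                    }
                }
            }
        }

    The results can then be fed to the AutoCompleteBox (for example from its Populating event) so only the matches for the typed prefix are ever materialized.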

    Read the article

  • Moving webshop storage to NoSQL solution

    - by mare
    If you had a webshop solution based on a SQL Server relational DB, what would be the reasons, if any, to move to NoSQL storage? Does it even make sense to migrate datastores that rely heavily on relations to NoSQL? If starting from scratch, would you choose a NoSQL solution over a relational one for a webshop project, which will, after a while, again end up with a bunch of tables like Articles, Classifications, TaxRates, Pricelists etc. and a magnitude of relations between them? What is the support like in .NET (4.0) for MongoDB, or MongoDB's support for .NET 4.0? Can I count on rich code generation tools similar to the EF wizard, L2SQL wizard etc. for MongoDB? Because from what I have read so far, NoSQL stores are mostly suited to document storage and simpler object models. Your answer to this question will help me make the right infrastructure design decisions.
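
    There is no EF-style designer or code generation wizard for MongoDB; the trade-off is that the driver maps plain C# classes to documents by convention, so the generated-code layer mostly disappears. A small sketch using the current official MongoDB .NET driver (the MongoDB.Driver NuGet package, which post-dates the .NET 4.0 era this question was asked in); the document model and names are illustrative only, showing classifications and pricing embedded in the article document rather than joined from separate tables:

        using System.Collections.Generic;
        using MongoDB.Bson;
        using MongoDB.Driver;

        public class Article
        {
            public ObjectId Id { get; set; }
            public string Name { get; set; }
            public List<string> Classifications { get; set; }  // embedded, no join table
            public decimal Price { get; set; }
            public decimal TaxRate { get; set; }
        }

        public static class WebshopDemo
        {
            public static void Run()
            {
                var client = new MongoClient("mongodb://localhost:27017");
                var db = client.GetDatabase("webshop");
                var articles = db.GetCollection<Article>("articles");

                articles.InsertOne(new Article
                {
                    Name = "Copper pipe 15mm",
                    Classifications = new List<string> { "plumbing", "pipes" },
                    Price = 4.20m,
                    TaxRate = 0.20m
                });

                // The driver maps documents back to the POCO -- no generated mapping code needed.
                var plumbing = articles.Find(a => a.Classifications.Contains("plumbing")).ToList();
            }
        }

    Where the model is genuinely relational (tax rates and price lists shared across many articles and updated centrally), that embedded data either gets duplicated or re-normalised into separate collections, which is the usual argument for keeping a webshop's core data relational.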

    Read the article

  • MySQL storage engine dilemma

    - by burntblark
    There are two MySQL database features that I want to use in my application: FULL-TEXT SEARCH and TRANSACTIONS. The dilemma is that I cannot get both features in one storage engine. Either I use MyISAM (which has the FULL-TEXT SEARCH feature) or I use InnoDB (which supports the TRANSACTION feature). I can't have both. My question is, is there any way I can have both features in my application before I am forced to make a choice between the two storage engines?

    Read the article

  • Rapid Repository – Silverlight Development

    - by SeanMcAlinden
    Hi All, One of the questions I was recently asked was whether Rapid Repository would work for normal Silverlight development as well as for the Windows 7 Phone. I can confirm that the current code in the trunk will definitely work for both Windows 7 Phone and normal Silverlight development. I haven’t tested V1.0 for compatibility, but V2.0, which will be released fairly soon, will work absolutely fine.   Kind Regards, Sean McAlinden.

    Read the article

  • Windows 7 Phone Database Rapid Repository – V2.0 Beta Released

    - by SeanMcAlinden
    Hi All, A V2.0 beta has been released for the Windows 7 Phone database Rapid Repository; this can be downloaded at the following: http://rapidrepository.codeplex.com/ Along with the new View feature, which greatly enhances querying and performance, various bugs have been fixed, including a more serious bug with the caching that caused the GetAll() method to sometimes return inconsistent results (I’m a little bit embarrassed by this bug). If you are currently using V1.0 in development, I would recommend swapping in the beta immediately. A full release will be available very shortly; I just need a few more days of testing and some input from other users/testers.   *Breaking Changes* The only real change is that the RapidContext has moved under the main RapidRepository namespace. Various internal methods have actually been made ‘internal’ and replaced with a more friendly API (I imagine not many users will notice this change). Hope you like it. Kind Regards, Sean McAlinden

    Read the article

  • Windows 7 Phone Database – Querying with Views and Filters

    - by SeanMcAlinden
    I’ve just added a feature to Rapid Repository to greatly improve how the Windows 7 Phone Database is queried for performance (this is in the trunk, not in Release V1.0). The main concept behind it is to create a View Model class which has only the minimum data you need for a page. This View Model is then stored and retrieved rather than the whole list of entities. Another feature of the views is that they can be pre-filtered to improve query performance even further. You can download the source from the Microsoft Codeplex site http://rapidrepository.codeplex.com/.

    Setting up a view

    Let's say you have an entity that stores lots of data about a game result, for example:

    GameScore entity

        public class GameScore : IRapidEntity
        {
            public Guid Id { get; set; }
            public string GamerId { get; set; }
            public string Name { get; set; }
            public Double Score { get; set; }
            public Byte[] ThumbnailAvatar { get; set; }
            public DateTime DateAdded { get; set; }
        }

    On your page you want to display a list of scores, but you only want to display the score and the date added, so you create a View Model for displaying just those properties.

    GameScoreView

        public class GameScoreView : IRapidView
        {
            public Guid Id { get; set; }
            public Double Score { get; set; }
            public DateTime DateAdded { get; set; }
        }

    Now that you have the view model, the first thing to do is set up the view at application start up. This is done using the following syntax.

    View Setup

        public MainPage()
        {
            RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score });
        }

    As you can see, using a little bit of lambda syntax, you put in the code for constructing a single view; this is used internally for mapping an entity to a view. *Note* you do not need to map the Id property; this is done automatically, and a view model id will always be the same as its corresponding entity's.

    Adding Filters

    One of the cool features of the view is that you can add filters to limit the amount of data stored in the view, which will dramatically improve performance. You can add multiple filters using the fluent syntax if required. In this example, let's say that you will only ever show the scores for the last 10 days; you could add a filter like the following:

    Add single filter

        public MainPage()
        {
            RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
                .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10));
        }

    If you wanted to further limit the data, you could also say only scores above 100:

    Add multiple filters

        public MainPage()
        {
            RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
                .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10))
                .AddFilter(x => x.Score > 100);
        }

    Querying the view model

    So the important part is how to query the data. This is done using the repository: there is a method called Query which accepts the type of view as a generic parameter (you can have multiple View Model types per entity type). You can either use the result of the query method directly or perform further querying on the result if required.

    Querying the View

        public void DisplayScores()
        {
            RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
            List<GameScoreView> scores = repository.Query<GameScoreView>();

            // display logic
        }

    Further Filtering

        public void TodaysScores()
        {
            RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
            List<GameScoreView> todaysScores = repository.Query<GameScoreView>().Where(x => x.DateAdded > DateTime.Now.AddDays(-1)).ToList();

            // display logic
        }

    Retrieving the actual entity

    Retrieving the actual entity can be done easily by using the GetById method on the repository. Say, for example, you allow the user to click on a specific score to get further information; you can take the Id populated in the returned GameScoreView and use it directly on the repository to retrieve the full entity.

    Get Full Entity

        public void GetFullEntity(Guid gameScoreViewId)
        {
            RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
            GameScore fullEntity = repository.GetById(gameScoreViewId);

            // display logic
        }

    Synchronising The View

    If you are upgrading from Rapid Repository V1.0 and are likely to have data in the repository already, you will need to perform a synchronisation to ensure the views and entities are fully in sync. You can either do this as a one-off during the application upgrade or, if you are a little more cautious, run it at each application start up.

    Synchronise the view

        public void MyUpgradeTasks()
        {
            RapidRepository<GameScore>.SynchroniseView<GameScoreView>();
        }

    It’s worth noting that in normal operation the view keeps itself in sync with the entities, so this is only really required if you are upgrading from V1.0 to V2.0 when it gets released shortly.

    Summary

    I really hope you like this feature; it will be great for performance and I believe it supports good practice by promoting the use of View Models for specific pages. I’m hoping to produce a beta for this over the next few days; I just want to add some more tests and hopefully iron out any bugs. I would really appreciate any thoughts on this feature and would really love to know of any bugs you find. You can download the source from the following: http://rapidrepository.codeplex.com/ Kind Regards, Sean McAlinden.

    Read the article

  • SQLAuthority News – SQL Server Technical Article – The Data Loading Performance Guide

    - by pinaldave
    The white paper describes load strategies for achieving high-speed data modifications of a Microsoft SQL Server database. “Bulk Load Methods” and “Other Minimally Logged and Metadata Operations” provide an overview of two key and interrelated concepts for high-speed data loading: bulk loading and metadata operations. With this background established, the white paper describes how these methods can be [...]

    Read the article

  • Trace Flag 610 – When should you use it?

    - by simonsabin
    Thanks to Marcel van der Holst for providing this great information on the use of Trace Flag 610. This trace flag can be used to get minimal logging into a B-tree (i.e. a clustered table or an index on a heap) that already has data. It remains a trace flag because testing found some scenarios where it didn’t perform as well. Marcel explains why below. “ TF610 can be used to get minimal logging in a non-empty B-Tree. The idea is that when you insert a large amount of data, you don't want to...(read more)

    Read the article

  • Access and Manage Your Ubuntu One Account in Chrome and Iron

    - by Asian Angel
    Do you have an Ubuntu One account that you access across different operating systems? Whether you are using Ubuntu, a different flavor of Linux, Windows, or Mac, the Ubuntu One web app makes it easy to access and manage your Ubuntu One account in just moments. The Ubuntu One web app will definitely be useful if you find yourself away from your favorite Ubuntu computer but need to get important files uploaded to your account. Ubuntu One [Chrome Web Store]

    Read the article

  • Back Up to Tape the Way You Shop For Groceries

    - by rickramsey
    Imagine if this was how you shopped for groceries: From the end of the aisle sprint to the point where you reach the ketchup. Pull a bottle from the shelf and yell at the top of your lungs, “Got it!” Sprint back to the end of the aisle. Start again and sprint down the same aisle to the mustard, pull a bottle from the shelf and again yell for the whole store to hear, “Got it!” Sprint back to the end of the aisle. Repeat this procedure for every item you need in the aisle. Proceed to the next aisle and follow the same steps for the list of items you need from that aisle. Sounds ridiculous, doesn’t it? Not only is it horribly inefficient, it’s exhausting and can lead to wear out failures on your grocery cart, or worse, yourself. This is essentially how NetApp and some other applications write NDMP backups to tape. In the analogy, the ketchup and mustard are the files to be written, yelling “Got it!” is the equivalent of a sync mark at the end of a file, and the sprint back to the end of an aisle is the process most commonly called a “backhitch” where the drive has to back up on a tape to start writing again. Writing to tape in this way results in very slow tape drive performance and imposes unnecessary wear on the tape drive and the media, especially when writing small files. The good news is not all tape drives behave this way when writing small files. Unlike midrange LTO drives, Oracle’s StorageTek T10000D tape drive is designed to handle this scenario efficiently. The difference between the two drive types is that the T10000D drive gives you the ability to write files in a NetApp NDMP backup environment the way you would normally shop for groceries. With grocery shopping, you essentially stream through aisles picking up items as you go, and then after checking out, yell, “Got it!”, though you might do that last step silently. With the T10000D, it has a feature called the Tape Application Accelerator, which prevents the drive from having to stop after each file is written to notify NetApp or another application that the write was successful. When enabled in the T10000D tape drive, Tape Application Accelerator causes the tape drive to respond to tape mark and file sync commands differently than when disabled: A tape mark received by the tape drive is treated as a buffered tape mark. A file sync received by the tape drive is treated as a no op command. Since buffered tape marks and no op commands do not cause the tape drive to empty the contents of its buffer to tape and backhitch, the data is written to tape in significantly less time. Oracle has emulated NetApp environments with a number of different file sizes and found the following when comparing the T10000D with the Tape Application Accelerator enabled versus LTO6 tape drives. Notice how the T10000D is not only monumentally faster, but also remarkably consistent? In addition, the writing of the 50 GB of files is done without a single backhitch. The LTO6 drive, meanwhile, will perform as many as 3,800 backhitches! At the end of writing the entire set of files, the T10000D tape drive reports back to the application, in this case NetApp, that the write was successful via a tape mark. So if the Tape Application Accelerator dramatically improves performance and reliability, why wouldn’t you always have it enabled? The reason is because tape drive buffers are meant to be just temporary data repositories so in the event of a power loss, there could be data loss in certain environments for the files that resided in the buffer. 
    Fortunately, we do have best practices depending on your environment to avoid this from happening. I highly recommend reading Maximizing Tape Performance with StorageTek T10000 Tape Drives (pdf) to decide which best practice is right for you. The white paper also digs deeper into the benefits of the Tape Application Accelerator. The white paper is free, and after downloading it you can decide for yourself whether you want to yell “Got it!” out loud or just silently to yourself. Customer Advisory Panel One final link: Oracle has started up a Customer Advisory Panel program to collect feedback from customers on their current experiences with Oracle products, as well as desires for future product development. If you would like to participate in the program, go to this link at oracle.com. photo taken on Idaho's Sacajewea Historic Biway by Rick Ramsey - Brian Zents

    Read the article

  • Don’t Miss The Top Exastack ISV Headlines – Week Of June 5

    - by Roxana Babiciu
    Kerridge achieves Oracle Exadata Optimized status with K8, an ERP solution for the distribution, merchant and wholesale/retail sectors. Online transaction processing saw a 12x increase in volume throughput over previous benchmarks – Watch video. Accenture achieves Oracle Exalogic Optimized status with AFPO, a unique accelerator for customer-facing solutions. Over 125 clients cut their implementation costs by up to thirty percent – Read more.

    Read the article

  • Don’t Miss The Top Exastack ISV Headlines – Week Of May 26

    - by Roxana Babiciu
    Calypso Technology announced that Calypso version 14 has achieved Oracle Exadata Optimized status through OPN. In simulations of data-intensive straight-through-processing tasks, Calypso achieved performance gains of up to 500% using Exadata hardware – Read more. Infosys achieves Oracle SuperCluster Optimized status with Finacle, a core banking solution. Finacle can process 6x the volume of transactions currently processed by the entire US banking system – Read more.

    Read the article

  • Killing Stuck Child JVM's

    - by ACShorten
    Note: This facility only applies to Oracle Utilities Application Framework products using COBOL. In some situations, the Child JVMs may spin. This causes multiple startup/shutdown Child JVM messages to be displayed and recursive child JVMs to be initiated and shunned. If you see the following message: "Unable to establish connection on port …. after waiting .. seconds.", the issue can be caused intermittently by CPU spins connected with the creation of new processes, specifically Child JVMs. Recursive (or double) invocation of the System.exit call in the remote JVM may be caused by a Process.destroy call that the parent JVM always issues when shunning a JVM. The issue may happen when the thread in the parent JVM that is responsible for the recycling gets stuck, and it affects all child JVMs. If this issue occurs at your site, there are a number of options to address it: configure an Operating System level kill command to force the Child JVM to be shunned when it becomes stuck; configure a Process.destroy command to be used if the kill command is not configured or desired; and specify a time tolerance to detect stuck threads before issuing the Process.destroy or kill commands. Note: This facility is also used when the Parent JVM is shut down, to ensure no zombie Child JVMs exist. The following additional settings must be added to the spl.properties for the Business Application Server to use this facility:

    spl.runtime.cobol.remote.kill.command – Specify the command to kill the Child JVM process. This can be a command, or a script to execute to provide additional information. The kill.command property can accept two arguments, {pid} and {jvmNumber}, in the specified string. The arguments must be enclosed in curly braces as shown here. Note: The PID will be appended to the killcmd string unless the {pid} and {jvmNumber} arguments are specified. The jvmNumber can be useful if passed to a script for logging purposes. Note: If a script is used it must be in the path and be executable by the OS user running the system.

    spl.runtime.cobol.remote.destroy.enabled – Specify whether to use the Process.destroy command instead of the kill command. Specify true or false. The default value is false. Note: Unless otherwise required, it is recommended to use the kill command option if shunning JVMs is an issue; therefore this value can remain at its default value, false, unless otherwise required.

    spl.runtime.cobol.remote.kill.delaysecs – Specify the number of seconds to wait for the Child JVM to terminate naturally before issuing the Process.destroy or kill commands. The default is 10 seconds.

    For example:

        spl.runtime.cobol.remote.kill.command=kill -9 {pid} {jvmNumber}
        spl.runtime.cobol.remote.destroy.enabled=false
        spl.runtime.cobol.remote.kill.delaysecs=10

    When a Child JVM is to be recycled, these properties are inspected and the spl.runtime.cobol.remote.kill.command is executed, if provided. This is done after waiting for spl.runtime.cobol.remote.kill.delaysecs seconds to give the JVM time to shut itself down. The spl.runtime.cobol.remote.destroy.enabled property must be set to true AND the spl.runtime.cobol.remote.kill.command omitted for the original Process.destroy command to be used on the process. Note: By default spl.runtime.cobol.remote.destroy.enabled is set to false and is therefore disabled. If neither spl.runtime.cobol.remote.kill.command nor spl.runtime.cobol.remote.destroy.enabled is specified, child JVMs will not be forcibly killed.
    They will be left to shut themselves down (which may lead to orphan JVMs). If both are specified, the spl.runtime.cobol.remote.kill.command is preferred and spl.runtime.cobol.remote.destroy.enabled is defaulted to false. It is recommended to invoke a script to issue the kill command rather than using the kill -9 command directly. For example, the following sample script ensures that the process id is an active cobjrun process before issuing the kill command:

    forcequit.sh

        #!/bin/sh
        THETIME=`date +"%Y-%m-%d %H:%M:%S"`
        if [ "$1" = "" ]
        then
          echo "$THETIME: Process Id is required" >>$SPLSYSTEMLOGS/forcequit.log
          exit 1
        fi
        javaexec=cobjrun
        ps e $1 | grep -c $javaexec
        if [ $? = 0 ]
        then
          echo "$THETIME: Process $1 is an active $javaexec process -- issuing kill -9 $1" >>$SPLSYSTEMLOGS/forcequit.log
          kill -9 $1
          exit 0
        else
          echo "$THETIME: Process id $1 is not a $javaexec process or not active -- kill will not be issued" >>$SPLSYSTEMLOGS/forcequit.log
          exit 1
        fi

    This script's name would then be specified as the value for the spl.runtime.cobol.remote.kill.command property, for example:

        spl.runtime.cobol.remote.kill.command=forcequit.sh

    The forcequit script does not have any explicit parameters, but the pid is passed automatically. To use the jvmNumber parameter it must be explicitly specified in the command. For example, to call the forcequit.sh script and pass it the pid and the child JVM number, specify it as follows:

        spl.runtime.cobol.remote.kill.command=forcequit.sh {pid} {jvmNumber}

    The script can then use the JVM number for logging purposes or to further ensure that the correct pid is being killed. If the arguments are omitted, the pid is automatically appended to the spl.runtime.cobol.remote.kill.command string. To use this facility the following patches must be installed: Patch 13719584 for Oracle Utilities Application Framework V2.1, Patches 13684595 and 13634933 for Oracle Utilities Application Framework V2.2, and Group Fix 4 (as Patch 13640668) for Oracle Utilities Application Framework V4.1.

    Read the article

  • Don’t Miss The Top Exastack ISV Headlines – Week Of June 5

    - by Roxana Babiciu
    Smartsoft's OCEAN Payment Processing Solution achieves Oracle Exadata Optimized status. "Performance is the most important issue for our success in the market and running OCEAN on the Oracle Exadata Database Machine provides customers with extreme performance.” – Learn more Banking solution FORBIS Ltd’s FORPOST achieves Oracle Exadata, Exalogic and SuperCluster Ready Status. “We are glad to offer our current and future customers the newest features provided by Oracle Engineered Systems to achieve maximum reliability and speed operation.” – Learn more

    Read the article

  • Ubuntu Server available updates

    - by Rapture
    In Ubuntu 11.04 Server when I would log in via ssh it would tell me how many packages are available for updating in the welcome message. After upgrading to 11.10 I no longer get that information. Is there a package I need to install or a config file that needs changing? 11.04 output: Welcome to Ubuntu 11.10 (GNU/Linux 3.0.0-12-generic x86_64) * Documentation: https://help.ubuntu.com/ 32 packages can be updated. 8 updates are security updates. Last login: Mon Nov 21 16:19:01 2011 from han-solo.local 11.10 output: Welcome to Ubuntu 11.10 (GNU/Linux 3.0.0-12-server x86_64) * Documentation: https://help.ubuntu.com/11.10/serverguide/C No mail. Last login: Tue Nov 22 19:07:19 2011 from han-solo.local

    Read the article

  • How can a gamepad control THE mouse?

    - by Boris
    There are many questions about this subject: Remapping both mouse and keyboard to a gamepad How do I configure a joystick or gamepad? How to control the mouse pointer via my keyboard? ... But the purpose of those questions/answers is to be able to use the gamepad for playing a game. I would like a solution that uses the gamepad to control THE mouse, replacing the mouse with the gamepad in all applications. That way I could control my computer in the living room, from my couch, with a wireless gamepad.

    Read the article

  • Why is my external USB hard drive sometimes completely inaccessible?

    - by Eliah Kagan
    I have an external USB hard drive, consisting of an 1 TB SATA drive in a Rosewill RX35-AT-SU SLV Aluminum 3.5" Silver USB 2.0 External Enclosure, plugged into my SONY VAIO VGN-NS310F laptop. It is plugged directly into the computer (not through a hub). The drive inside the enclosure is a 7200 rpm Western Digital, but I don't remember the exact model. I can remove the drive from the enclosure (again), if people think it's necessary to know that detail. The drive is formatted ext4. I mount it dynamically with udisks on my Lubuntu 11.10 system, usually automatically via PCManFM. (I have had Lubuntu 12.04 on this machine, and experienced all this same behavior with that too.) Every once in a while--once or twice a day--it becomes inaccessible, and difficult to unmount. Attempting to unmount it with sudo umount ... gives an error message saying the drive is in use and suggesting fuser and lsof to find out what is using it. Killing processes found to be using the drive with fuser and lsof is sometimes sufficient to let me unmount it, but usually isn't. Once the drive is unmounted or the machine is rebooted, the drive will not mount. Plugging in the drive and turning it on registers nothing on the computer. dmesg is unchanged. The drive's access light usually blinks vigorously, as though the drive is being accessed constantly. Then eventually, after I keep the drive off for a while (half an hour), I am able to mount it again. While the drive doesn't work on this machine for a while, it will work immediately on another machine running the same version of Ubuntu. Sometimes bringing it back over from the other machine seems to "fix" it. Sometimes it doesn't. The drive doesn't always stop being accessible while mounted, before becoming unmountable. Sometimes it works fine, I turn off the computer, I turn the computer back on, and I cannot mount the drive. Currently this is the only drive with which I have this problem, but I've had problems that I think are the same as this, with different drives, on different Ubuntu machines. This laptop has another external USB drive plugged into it regularly, which doesn't have this problem. Unplugging that drive before plugging in the "problem" drive doesn't fix the problem. I've opened the drive up and made sure the connections were tight in the past, and that didn't seem to help (any more than waiting the same amount of time that it took to open and close the drive, before attempting to remount it). Does anyone have any ideas about what could be causing this, what troubleshooting steps I should perform, and/or how I could fix this problem altogether? Update: I tried replacing the USB data cable (from the enclosure to the laptop), as Merlin suggested. I should've tried that long ago, since it fits the symptoms perfectly (the drive works on another machine, which would make sense because the cable would be bent at a different angle, possibly completing a circuit of frayed wires). Unfortunately, though, this did not help--I have the same problem with the new cable. I'll try to provide additional detailed information about the drive inside the enclosure, next time I'm able to get the drive working. (At the moment I don't have another machine available to attach it.) Major Update (28 June 2012) The drive seems to have deteriorated considerably. I think this is so, because I've attached it to another machine and gotten lots of errors about invalid characters, when copying files from it. 
    I am less interested in recovering data from the drive than I am in figuring out what is wrong with it. I specifically want to figure out if the problem is the drive or the enclosure. Now, when I plug the drive into the original machine where I was having the problems, it still doesn't appear (including with sudo fdisk -l), but it is recognized by the kernel and messages are added to dmesg. Most of the messages consist of errors like this, repeated many times:

        [ 7.707593] sd 5:0:0:0: [sdc] Unhandled sense code
        [ 7.707599] sd 5:0:0:0: [sdc] Result: hostbyte=invalid driverbyte=DRIVER_SENSE
        [ 7.707606] sd 5:0:0:0: [sdc] Sense Key : Medium Error [current]
        [ 7.707614] sd 5:0:0:0: [sdc] Add. Sense: Unrecovered read error
        [ 7.707621] sd 5:0:0:0: [sdc] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
        [ 7.707636] end_request: critical target error, dev sdc, sector 0
        [ 7.707641] Buffer I/O error on device sdc, logical block 0

    Here are all the lines from dmesg starting with when the drive is recognized. Please note that: I'm back to running Lubuntu 12.04 on this machine (and perhaps that's a factor in better error messages). Now that the drive has been plugged into another machine and back into this one, and also now that this machine is back to running 12.04, the drive's access light doesn't blink as I had described. Looking at the drive, it would appear as though it is working normally, with low or no access. This behavior (the errors) occurs when rebooting the machine with the drive plugged in, and also when manually plugging in the drive. A few of the messages are about /dev/sdb. That drive is working fine. The bad drive is /dev/sdc. I just didn't want to edit anything out from the middle.

    Read the article

  • unable to format usb 1204 [daemon inhibited ]

    - by santosamaru
    I tried to format my USB drive. The first time it worked and all the data was gone, but now I can't save any file to the drive. I then tried to check whether it is working or broken; here is the report:

        santos@santos:~$ sudo badblocks -v /dev/sdb
        [sudo] password for santos:
        Sorry, try again.
        [sudo] password for santos:
        Checking blocks 0 to 7824383
        Checking for bad blocks (read-only test): 0.00% done, 0:00 elapsed. (0/0/0 errdone
        Pass completed, 0 bad blocks found. (0/0/0 errors)
        santos@santos:~$ sudo badblocks -v -w /dev/sdb
        [sudo] password for santos:
        Sorry, try again.
        [sudo] password for santos:
        /dev/sdb is apparently in use by the system; it's not safe to run badblocks!
        santos@santos:~$

    How can I format the drive and fix this issue? I have read this link: Formatting Pen Drive causes 'Daemon Is Inhibited' Error. I also get "the destination is read only" when I try to move any items from the desktop. I searched and found this thread http://ubuntuforums.org/showthread.php?t=1955353 about the same problem, but following user13509's suggestion there did not help.

    Read the article

  • What is the best strategy for transforming unicode strings into filenames?

    - by David Cowden
    I have a bunch (thousands) of resources in an RDF/XML file. I am writing a certain subset of the resources to files -- one file for each, and I'm using the resource's title property as the file name. However, the titles are everyday article, website, and blog post titles, so they contain characters unsafe for a URI (the necessary step for constructing a valid file path). I know of the Jersey UriBuilder but I can't quite get it to work for my needs, as I detailed in a different question on SO. Some possibilities I have considered are: Since each resource should also have an associated URL, I could try to use the name of the file on the server. The downside of this is that sometimes people don't name their content logically, and I think the title of an article better reflects the content that will be in each text file. Construct a whitelist of valid characters and parse the string myself, defining substitutions for unsafe characters. The downside of this is that the result could be just as unreadable as the former solution, because presumably the content creators went through a similar process when placing the files on their server. Choose a more generic naming scheme, place the title in the text file along with the other attributes, and tell my boss to live with it. So my question here is, what methods work well for dealing with a scenario where you need to construct file names out of strings with potentially unsafe characters? Is there a solution that better fits my constraints?
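
    For the whitelist option, the usual trick for keeping results readable is to map every run of disallowed characters to a single separator, then trim and cap the length; appending the resource's id or a short hash guarantees uniqueness when two titles collapse to the same name. A rough sketch of that approach (shown in C# for illustration; the question's Java/Jersey stack would do the same thing character by character, and the separator, length cap, and fallback name are arbitrary choices):

        using System.Text;

        public static class FileNameSanitizer
        {
            private const int MaxLength = 100;  // arbitrary cap, adjust to taste

            public static string FromTitle(string title)
            {
                var sb = new StringBuilder(title.Length);
                foreach (char c in title.ToLowerInvariant())
                {
                    if ((c >= 'a' && c <= 'z') || (c >= '0' && c <= '9'))
                        sb.Append(c);                       // whitelisted character, keep it
                    else if (sb.Length > 0 && sb[sb.Length - 1] != '-')
                        sb.Append('-');                     // any run of unsafe characters becomes one dash
                }
                string name = sb.ToString().Trim('-');
                if (name.Length > MaxLength)
                    name = name.Substring(0, MaxLength).TrimEnd('-');
                return name.Length > 0 ? name : "untitled"; // e.g. a title made only of punctuation
            }
        }

    For example, "What's New in RDF/XML?" comes out as "what-s-new-in-rdf-xml", which stays recognisably related to the title without depending on how the original server named its files.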

    Read the article

  • 99 Life Hacks to Make Your Life Easier!

    - by Asian Angel
    We have featured some awesome life hacks, tips, and tricks here before on HTG ETC, but today we are back with a super compilation full of geeky ingenuity! Get ready to increase your problem solving repertoire with this terrific collection of 99 life hacks. 99 Life Hacks to make your life easier! [via BoingBoing]

    Read the article

  • SQL File Layout Viewer 1.2

    - by merrillaldrich
    Just ahead of presenting it at SQL Saturday in my home town of Minneapolis / Saint Paul, I’m happy to release an updated version of the SQL Server File Layout Viewer. This is a utility I released back in March for inspecting the arrangement of data pages in SQL Server files. If you will be in Minneapolis this Saturday (space permitting), please come out and see this tool in action! New Features Based on feedback from others in the SQL Server community, I made these enhancements: Page types now provide...(read more)

    Read the article

  • Caching large amount of ajax returned objects

    - by ofcapl
    I'm building an application which fetches a large number of items via ajax requests to another application's API. The API returns 6k - 30k JS objects which are used multiple times across various application views (sorting, filtering etc.). I would like to avoid querying the API every time for such a big list, so I decided to cache this data somehow. I was thinking about various solutions: saving it to localStorage, using some caching library (e.g. locachejs), or storing it in a JS variable. I'm not an expert, so I would like to hear your suggestions about each (or one) of these solutions, and their pros and cons. Any help will be very much appreciated.

    Read the article

  • Disk space suddenly 100% used?

    - by dannymcc
    I'm trying to identify why, suddenly, 100% of our disk space is in use. I have already rebooted but the issue persists. Here are the outputs of some commands that are showing some strange (for me) results:

        danny@hydrogen:~$ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/cciss/c0d0p1     130G  122G  949M 100% /
        none                  1.9G  196K  1.9G   1% /dev
        none                  2.0G     0  2.0G   0% /dev/shm
        none                  2.0G   40K  2.0G   1% /var/run
        none                  2.0G     0  2.0G   0% /var/lock
        none                  2.0G     0  2.0G   0% /lib/init/rw

        danny@hydrogen:/$ sudo du -chs /
        du: cannot access `/proc/1662/task/1662/fd/4': No such file or directory
        du: cannot access `/proc/1662/task/1662/fdinfo/4': No such file or directory
        du: cannot access `/proc/1662/fd/4': No such file or directory
        du: cannot access `/proc/1662/fdinfo/4': No such file or directory

        danny@hydrogen:/$ df
        Filesystem           1K-blocks      Used Available Use% Mounted on
        /dev/cciss/c0d0p1    135342296 128144108    323104 100% /
        none                   1991336       196   1991140   1% /dev
        none                   1995788         0   1995788   0% /dev/shm
        none                   1995788        40   1995748   1% /var/run
        none                   1995788         0   1995788   0% /var/lock
        none                   1995788         0   1995788   0% /lib/init/rw

        danny@hydrogen:/$ mount
        /dev/cciss/c0d0p1 on / type ext4 (rw,errors=remount-ro)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        none on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        none on /dev type devtmpfs (rw,mode=0755)
        none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        none on /dev/shm type tmpfs (rw,nosuid,nodev)
        none on /var/run type tmpfs (rw,nosuid,mode=0755)
        none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
        none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)

        danny@hydrogen:/$ sudo du -h --max-depth=1
        634M  ./premvet_sync
        5.6M  ./etc
        4.0K  ./opt
        16K   ./lost+found
        7.4M  ./bin
        623M  ./lib
        196K  ./dev
        0     ./sys
        4.0K  ./srv
        4.0K  ./cdrom
        8.0K  ./media
        52K   ./tmp
        ... it hangs for ages here.....

    The server is running Ubuntu 10.04.4 LTS.

        System load: 2.85
        Temperature: 8 C
        Usage of /: 94.7% of 129.07GB
        Processes: 132
        Memory usage: 39%
        Users logged in: 0
        Swap usage: 0%
        IP address for eth0: 192.168.1.124

    => / is using 94.7% of 129.07GB

    I'm struggling to comprehend why this is happening! Any pointers would be appreciated.

    Read the article

  • cPanel's Web Disk - Security issues?

    - by Tim Sparks
    I'm thinking of using Web Disk (built into the later versions of cPanel) to allow a Windows or Mac computer to map a network drive that is actually a folder on our website (above the public_html folder). We currently use an antiquated local server to store information, but it is only accessible from one location; we would like to be able to access it from other locations as well. I understand that folders above public_html are not accessible via http, but I want to know how secure access to these folders is when they are mapped as a network drive. There is potentially sensitive information that we need to decide whether it is appropriate to store here. The map network drive option works well because it behaves as if the files are on your own computer (i.e. you can open and save files without having to upload them, as that happens automatically). We have used Dropbox for similar purposes, but space is an issue with them, as is accountability, so we haven't used it for sensitive information. Are there any notable security concerns with using Web Disk as a secure file server?

    Read the article
