Search Results

Search found 45804 results on 1833 pages for 'large files'.

Page 500/1833 | < Previous Page | 496 497 498 499 500 501 502 503 504 505 506 507  | Next Page >

  • autocomplete not working on one server, works on others

    - by dogmatic69
    I have Ubuntu 10.10 x64 and x86 running on various servers, and autocomplete works on all of them bar one. The issue: apt-<tab> would show a list of options, but sudo apt-<tab> would not. After fiddling with it for a few hours I found that /etc/bash_autocomplete did not exist on the broken server. After copying the one from a working server it now works, but still not properly: sudo apt-get ins<tab> does not do anything. Listing the files in /etc/bash_autocomplete.d/ on the working server shows about 50 files; the broken one has only two or three. I don't think I can just copy these files across, though, as that might offer completions for things that are not even installed. TL;DR: autocomplete is broken, how can I fix it? It seems like it is disabled somewhere; why is this?
    EDIT: OK, it was never installed...
      $ sudo apt-get install bash-completion
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      The following NEW packages will be installed
        bash-completion
      0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded.
      Need to get 140kB of archives.
      After this operation, 1,061kB of additional disk space will be used.
      Get:1 http://archive.ubuntu.com/ubuntu/ maverick-updates/main bash-completion all 1:1.2-2ubuntu1.1 [140kB]
      Fetched 140kB in 0s (174kB/s)
      Selecting previously deselected package bash-completion.
      (Reading database ... 23808 files and directories currently installed.)
      Unpacking bash-completion (from .../bash-completion_1%3a1.2-2ubuntu1.1_all.deb) ...
      Processing triggers for man-db ...
      Setting up bash-completion (1:1.2-2ubuntu1.1) ...
    It is now kind of working, but still wonky: apt-get ins<tab> gives "sudo apt-get insserv" as the only option, and apt-get install php5<tab> gives "apt-get install php5/", not the php5-* options.

    Read the article

  • APC (PHP Cache) Uptime 0 minutes, not caching

    - by Jussi
    My goal is to implement APC as an opcode cache for a Drupal 6 production site. So far I have tested APC with several PHP files, with and without including other PHP files via include_once. I have also tried to tweak the apc.ini values for shm_size, apc.include_once_override and apc.stat, restarting Apache every time. The result is that apc.php never shows any changes in any values (except, of course, the changed apc.ini values, which are shown as they should be). Every time I refresh the apc.php test page, the start time resets to the current time, showing an uptime of 0 minutes.
    The apc.php test page shows:
      General Cache Information
      APC Version 3.1.9  PHP Version 5.2.10  APC Host xxxx.xx.xx  Server Software Apache/2.2.3 (CentOS)
      Shared Memory 1 Segment(s) with 128.0 MBytes (mmap memory, pthread mutex Locks locking)
      Start Time 2011/07/26 11:53:56  Uptime 0 minutes  File Upload Support 1
      Cached Files 0 ( 0.0 Bytes)  Hits 1  Misses 1
      Request Rate (hits, misses) 2.00 cache requests/second  Hit Rate 1.00 cache requests/second
      Miss Rate 1.00 cache requests/second  Insert Rate 0.00 cache requests/second  Cache full count 0
      Cached Variables 0 ( 0.0 Bytes)  Hits 0  Misses 0
      Request Rate (hits, misses) 0.00 cache requests/second  Hit Rate 0.00 cache requests/second
      Miss Rate 0.00 cache requests/second  Insert Rate 0.00 cache requests/second  Cache full count 0
      apc.cache_by_default 1  apc.canonicalize 1  apc.coredump_unmap 0  apc.enable_cli 0  apc.enabled 1
      apc.file_md5 0  apc.file_update_protection 2  apc.filters  apc.gc_ttl 3600  apc.include_once_override 0
      apc.lazy_classes 0  apc.lazy_functions 0  apc.max_file_size 16  apc.mmap_file_mask /tmp/apcphp5.095eRm
      apc.num_files_hint 1024  apc.preload_path  apc.report_autofilter 0  apc.rfc1867 0  apc.rfc1867_freq 0
      apc.rfc1867_name APC_UPLOAD_PROGRESS  apc.rfc1867_prefix upload_  apc.rfc1867_ttl 3600  apc.serializer default
      apc.shm_segments 1  apc.shm_size 128M  apc.slam_defense 0  apc.stat 0  apc.stat_ctime 0  apc.ttl 7200
      apc.use_request_time 1  apc.user_entries_hint 4096  apc.user_ttl 7200  apc.write_lock 1
      Host Status Diagrams: Free: 128.0 MBytes (100.0%)  Hits: 1 (50.0%)  Used: 20.3 KBytes (0.0%)  Misses: 1 (50.0%)
      Detailed Memory Usage and Fragmentation: Fragmentation: 0%
    phpinfo shows:
      Server API CGI/FastCGI
      APC: Version 3.1.9  APC Debugging Enabled  MMAP Support Enabled  MMAP File Mask /tmp/apcphp5.JkKDk7
      Locking type pthread mutex Locks  Serialization Support php  Revision $Revision: 308812 $
      Build Date Jul 21 2011 14:31:12
    I followed these steps to find out whether suexec settings would prevent caching: http://www.litespeedtech.com/support/forum/showthread.php?t=4189
      [root@host /]# ps -ef|grep lsphp
      root 20402 17833 0 11:21 pts/0 00:00:00 grep lsphp
      [root@host /]# ps -waux
      root 17833 0.0 0.1 5004 1484 pts/0 S 10:39 0:00 bash
    ...which indicates that there is no lsphp running on the host. I also read the following article and comments, concluding that in my case the problem is not suexec, as the user apache is the owner of the httpd process: http://www.brandonturner.net/blog/2009/07/fastcgi_with_php_opcode_cache/ The suexec command is also not recognized when logged in and launched as root on the host, and I'm almost confident that there is no cPanel running on the host to check whether a setting there would reset the running cache process at some interval. This leaves me with few clues about where to head next. I tried to set (with chown and chgrp) apache as the owner of the apc.php file and some test PHP files, resulting in a 500 server error. Is there a way to check whether the file permissions prevent APC from staying running? I'm tremendously grateful for any suggestions or help.

    Read the article

  • Batch file script to remove special characters from filenames (Windows)

    - by njreed.myopenid.com
    I have a large set of files, some of which contain special characters in the filename (e.g. ä, ö, %, and others). I'd like a script to iterate over these files and rename them, removing the special characters. I don't really mind exactly what it does, but it could, for example, replace them with underscores, e.g. Störung%20.doc would be renamed to St_rung_20.doc. In order of preference:
    1. A DOS batch file
    2. A Windows script file to run with cscript (VBS)
    3. A third-party piece of software that can be run from the command line (i.e. no user interaction required)
    4. Another language script file, for which I'd have to install an additional script engine
    Background: I'm trying to encrypt these files with GnuPG on Windows, but it doesn't seem to handle special characters in filenames with the --encrypt-files option.
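    If option 4 is acceptable, a small Java program can do the renaming. The following is only a rough sketch (it assumes a JRE is available, handles a single directory passed as the first argument, and does not guard against name collisions after the replacement):
      import java.io.File;

      public class SanitizeFilenames {
          // Rename every file in the given directory, replacing any character
          // outside A-Z, a-z, 0-9, dot, dash and underscore with an underscore.
          public static void main(String[] args) {
              File dir = new File(args.length > 0 ? args[0] : ".");
              File[] files = dir.listFiles();
              if (files == null) return;
              for (File f : files) {
                  if (!f.isFile()) continue;
                  String clean = f.getName().replaceAll("[^A-Za-z0-9._-]", "_");
                  if (!clean.equals(f.getName())) {
                      boolean ok = f.renameTo(new File(dir, clean));
                      System.out.println((ok ? "renamed " : "FAILED  ") + f.getName() + " -> " + clean);
                  }
              }
          }
      }
    Run it from the folder's parent (java SanitizeFilenames C:\docs); GnuPG should then accept the sanitized names.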

    Read the article

  • Why does Perl's readdir() cache directory entries?

    - by Frank Straetz
    For some reason Perl keeps caching the directory entries I'm trying to read using readdir:
      opendir(SNIPPETS, $dir_snippets); # or die...
      while ( my $snippet = readdir(SNIPPETS) ) {
          print ">>>".$snippet."\n";
      }
      closedir(SNIPPETS);
    Since my directory contains two files, test.pl and test.man, I'm expecting the following output:
      .
      ..
      test.pl
      test.man
    Unfortunately Perl returns a lot of files that have since vanished, for example because I tried to rename them. After I move test.pl to test.yeah, Perl returns the following list:
      .
      ..
      test.pl
      test.yeah
      test.man
    What's the reason for this strange behaviour? The documentation for opendir, readdir and closedir doesn't mention any sort of caching mechanism, and "ls -l" clearly lists only two files.

    Read the article

  • Auditing events 4656 and 4658 on Windows folder on Server 2008

    - by PCurd
    During an overnight system state backup we are seeing thousands of success audit events (4656, 4658) on the folders c:\windows\servicing, system32 and others inside the Windows folder. We use file success auditing on some files, so I can't disable it, but this deluge is filling up the logs and making reporting tricky. What is the harm of changing the auditing settings on the Windows folder? What are the recommended settings to put on these files for people doing system state backups? Thanks,

    Read the article

  • An existing connection was forcibly closed by the remote host

    - by George
    I have a fat VB.NET WinForms client that is using an old asmx-style web service. Very often, when I perform a query that takes a while, I get the subject error. The error seems to occur in less than a minute, which is far less than the web service timeout value that I have set, or the timeout value on the ADO Command object that is performing the query within the web server. It seems to occur whenever I am performing a large query that is expected to return a lot of rows, or when I am sending up a large amount of data to the web service. For example, it just occurred when I was passing a large dataset to the web server:
      System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
        at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
        at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
        --- End of inner exception stack trace ---
        at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
        at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size)
        at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead)
        --- End of inner exception stack trace ---
        at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
        at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request)
        at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
        at Smit.Pipeline.Bo.localhost.WsSR.SaveOptions(String emailId, DataSet dsNeighborhood, DataSet dsOption, DataSet dsTaskApplications, DataSet dsCcUsers, DataSet dsDistinctUsers, DataSet dsReferencedApplications) in C:\My\Code\Pipeline2\Smit.Pipeline.Bo\Web References\localhost\Reference.vb:line 944
        at Smit.Pipeline.Bo.Options.Save(TaskApplications updatedTaskApplications) in
    I've been looking at tons of postings on this error, and it is surprising how varied the circumstances that cause it are. I've tried messing with Wireshark, but I am clueless about how to use it. This application only has about 20 users at any one time, and I am able to reproduce the error in the middle of the night when probably no one is using the app, so I don't think the number of requests to the web server or to the database is high; it was probably just the one request when I got the error just now. It seems to have everything to do with the amount of data being passed in either direction. This error is really chronic and is killing me. Please help.

    Read the article

  • web.config, configSource, and "The 'xxx' element is not declared" warning.

    - by UpTheCreek
    I have broken down the horribly unwieldy web.config file into individual files for some of the sections (e.g. connectionStrings, authentication, pages, etc.) using the configSource attribute. This is working fine, but the individual XML files that hold the section 'snippets' cause warnings in VS. For example, a file named roleManager.config is used for the role manager section and looks like this:
      <roleManager enabled="false">
      </roleManager>
    However, I get a blue squiggle under the roleManager element in VS, and the following warning: "The 'roleManager' element is not declared". I guess this is something to do with valid XML and schemas, etc. Is there an easy way to fix this? Something I can add to the individual files? Thanks. P.S. I have heard that it is bad practice to break the web.config file out like this, but I don't really understand why - can anyone enlighten me?

    Read the article

  • Add Command prompt in VS 2008 Express Edition manually

    - by Kumar
    Hi all, to add a command prompt to VS 2008 Express Edition, I have done the following: Tools > External Tools > Add, then entered the following information:
      Title: Visual Studio 2008 Command Prompt
      Command: cmd.exe
      Arguments: %comspec% /k ""C:\Program Files\Microsoft Visual Studio 9.0\VC\vcvarsall.bat"" x86
      Initial Directory: $(ProjectDir)
    Then OK/Apply. After this, when I went to the Tools menu and clicked on Visual Studio 2008 Command Prompt, a command prompt opened but I got the following error message:
      '"C:\Program Files\Microsoft Visual Studio 9.0\VC\vcvarsall.bat"' is not recognized as an internal or external command, operable program or batch file.
      C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE
    Please can somebody help me fix this problem, or teach me from scratch how to add a command prompt to the Tools menu manually in VS 2008 Express Edition. Thanks, Kumar

    Read the article

  • script to recursively check for and select dependencies

    - by rp.sullivan
    I have written a script that does this, but it is one of my first scripts ever, so I am sure there is a better way :) Let me know how you would go about doing it. I'm looking for a simple yet efficient approach. Here is some important background info (it might be a little confusing, but hopefully by the end it will make sense):
    1) This image shows the structure/location of the relevant dirs and files.
    2) The packages file located at ./config/default/config/packages is a space-delimited file. Field 5 is the "package name", which I will call $a for explanation's sake. Field 4 is the name of the dir containing the $a dir, which I will call $b. Field 1 shows whether the package is selected or not: "X" (capital x) for selected and "O" (capital o as in orange) for not selected. Here is an example of what the packages file might contain:
      ...
      X ---3------ 104.800 database gdbm 1.8.3 / base/library CROSS 0
      O -1---5---- 105.000 base libiconv 1.13.1 / base/tool CROSS 0
      X 01---5---- 105.000 base pkgconfig 0.25 / base/tool CROSS 0
      X -1-3------ 105.000 base texinfo 4.13a / base/tool CROSS DIETLIBC 0
      O -----5---- 105.000 develop duma 2_5_15 / base/development CROSS NOPARALLEL 0
      O -----5---- 105.000 develop electricfence 2_4_13 / base/development CROSS 0
      O -----5---- 105.000 develop gnupth 2.0.7 / extra/development CROSS NOPARALLEL FPIC-QUIRK 0
      ...
    3) For almost every package listed in the packages file there is a corresponding .cache file. The .cache file for package $a is located at ./package/$b/$a/$a.cache and contains a list of dependencies for that particular package. Here is an example of what one of the .cache files might look like. Note that the dependencies are field 2 of lines containing "[DEP]", and these dependencies are all names of packages in the packages file:
      [TIMESTAMP] 1134178701 Sat Dec 10 02:38:21 2005
      [BUILDTIME] 295 (9)
      [SIZE] 11.64 MB, 191 files
      [DEP] 00-dirtree
      [DEP] bash
      [DEP] binutils
      [DEP] bzip2
      [DEP] cf
      [DEP] coreutils
      ...
    So with all that in mind, I'm looking for a shell script that, from within the "main dir": looks at the ./config/default/config/packages file, finds the "selected" packages and reads the corresponding .cache files; then compiles a list of dependencies that excludes the already selected packages; then selects those dependencies (by changing field 1 to X) in the ./config/default/config/packages file, and repeats until all the dependencies are met. Note: the script will ultimately end up in the "scripts dir" and be called from the "main dir". If this is not clear, let me know what needs clarification. For those interested, I'm playing around with T2 SDE; if you are into playing around with Linux, it might be worth taking a look.
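    Not the shell script asked for, but the selection logic itself is a small fixed-point loop: keep pulling in the [DEP] entries of every selected package until a pass adds nothing new, then flip field 1 from O to X for everything collected. A sketch of that loop in Java, using the file locations and field positions described above (error handling and edge cases are omitted; this is an illustration of the algorithm, not a drop-in tool):
      import java.io.IOException;
      import java.nio.file.*;
      import java.util.*;

      public class SelectDeps {
          public static void main(String[] args) throws IOException {
              Path packages = Paths.get("config/default/config/packages");
              List<String> lines = Files.readAllLines(packages);

              // field 1 = X/O flag, field 4 = containing dir ($b), field 5 = package name ($a)
              Map<String, Integer> lineOf = new HashMap<>();   // package name -> line index
              Map<String, String>  dirOf  = new HashMap<>();   // package name -> field 4
              Set<String> selected = new HashSet<>();
              for (int i = 0; i < lines.size(); i++) {
                  String[] f = lines.get(i).trim().split("\\s+");
                  if (f.length < 5) continue;
                  lineOf.put(f[4], i);
                  dirOf.put(f[4], f[3]);
                  if (f[0].equals("X")) selected.add(f[4]);
              }

              // Fixed point: keep selecting dependencies of selected packages until nothing changes.
              boolean changed = true;
              while (changed) {
                  changed = false;
                  for (String pkg : new ArrayList<>(selected)) {
                      Path cache = Paths.get("package", dirOf.get(pkg), pkg, pkg + ".cache");
                      if (!Files.exists(cache)) continue;
                      for (String l : Files.readAllLines(cache)) {
                          String[] f = l.trim().split("\\s+");
                          if (f.length >= 2 && f[0].equals("[DEP]") && lineOf.containsKey(f[1])
                                  && selected.add(f[1])) {
                              changed = true;
                          }
                      }
                  }
              }

              // Flip field 1 from O to X for every selected package and write the file back.
              for (String pkg : selected) {
                  int i = lineOf.get(pkg);
                  lines.set(i, lines.get(i).replaceFirst("^O ", "X "));
              }
              Files.write(packages, lines);
          }
      }
    The same while-something-changed structure translates directly into a shell loop over grep/awk/sed.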

    Read the article

  • Logging Virus Definition Updates for MS Security Essentials in The Security Event Log

    - by Steve
    I would like to log a security event in Windows 7 whenever the Microsoft Security Essentials 2 virus definition files are updated, deleted, or changed. I was expecting to do this with an audit setting on one of the MS Security Essentials folders, but I wasn't sure which one, or how to avoid getting swamped with messages. What folder or files should I audit to track definition updates (or corruption) in the security event log, or is there a better approach?

    Read the article

  • chkdsk, SeaTools, and "does not have enough space to replace bad clusters"

    - by Zian Choy
    When I tried to do a Windows Vista Complete PC Backup, I received an error message that blathered about bad sectors. Then, when I ran chkdsk /r on the destination drive, this is what I got:
      C:\Windows\system32>chkdsk /R E:
      The type of the file system is NTFS.
      Volume label is Desktop Backup.
      CHKDSK is verifying files (stage 1 of 5)...
      822016 file records processed. File verification completed.
      1 large file records processed.
      0 bad file records processed.
      0 EA records processed.
      0 reparse records processed.
      CHKDSK is verifying indexes (stage 2 of 5)...
      848938 index entries processed. Index verification completed.
      0 unindexed files processed.
      CHKDSK is verifying security descriptors (stage 3 of 5)...
      822016 security descriptors processed. Security descriptor verification completed.
      13461 data files processed.
      CHKDSK is verifying file data (stage 4 of 5)...
      The disk does not have enough space to replace bad clusters detected in file 239649 of name .
      The disk does not have enough space to replace bad clusters detected in file 239650 of name .
      The disk does not have enough space to replace bad clusters detected in file 239651 of name .
      An unspecified error occurred. (... of 822000 files processed)
    Yet when I ran the SeaTools short and long generic tests on the Seagate disk, I didn't receive any errors. I know that I could reformat the disk and try running chkdsk /r again, but I'd prefer to avoid waiting 4 hours in the hope that the problem was magically fixed. On the other hand, if I RMA the drive to Seagate, I have no SeaTools error number to use and they may claim that the drive is just fine. What should I try next? Side frustration: there is plenty of free hard drive space. The E: partition has 182 GB free.

    Read the article

  • how to stop homegroup sharing folders?

    - by srisar
    Hi, I have a homegroup on my PC and laptop, both running Windows 7. I can share folders and files easily, but the problem is that I can't stop sharing a folder. I even went to Computer Management and stopped sharing from there, but inside the homegroup the "stopped" shared folders are still showing. Now I can't open them, because it says the network resource is unavailable, but the folders are still showing. How do I hide them?

    Read the article

  • Are there any working implementations of the rolling hash function used in the Rabin-Karp string search?

    - by c14ppy
    I'm looking to use a rolling hash function so I can take hashes of n-grams of a very large string. For example, "stackoverflow" broken up into 5-grams would be: "stack", "tacko", "ackov", "ckove", "kover", "overf", "verfl", "erflo", "rflow". This is ideal for a rolling hash function because after I calculate the first n-gram hash, the following ones are relatively cheap to calculate: I simply have to drop the first letter of the previous window and add the new last letter of the next one. I know that in general this hash function is generated as
      H = c1*a^(k-1) + c2*a^(k-2) + c3*a^(k-3) + ... + ck*a^0
    where a is a constant and c1,...,ck are the input characters. If you follow this link on the Rabin-Karp string search algorithm, it states that "a" is usually some large prime. I want my hashes to be stored in 32-bit integers, so how large a prime should "a" be such that I don't overflow my integer? Does there exist an existing implementation of this hash function somewhere that I could already use? Here is an implementation I created:
      public class hash2 {
          public int prime = 101;

          public int hash(String text) {
              int hash = 0;
              for (int i = 0; i < text.length(); i++) {
                  char c = text.charAt(i);
                  hash += c * (int) (Math.pow(prime, text.length() - 1 - i));
              }
              return hash;
          }

          public int rollHash(int previousHash, String previousText, String currentText) {
              char firstChar = previousText.charAt(0);
              char lastChar = currentText.charAt(currentText.length() - 1);
              int firstCharHash = firstChar * (int) (Math.pow(prime, previousText.length() - 1));
              int hash = (previousHash - firstCharHash) * prime + lastChar;
              return hash;
          }

          public static void main(String[] args) {
              hash2 hashify = new hash2();
              int firstHash = hashify.hash("mydog");
              System.out.println(firstHash);
              System.out.println(hashify.hash("ydogr"));
              System.out.println(hashify.rollHash(firstHash, "mydog", "ydogr"));
          }
      }
    I'm using 101 as my prime. Does it matter if my hashes overflow? I think this is desirable but I'm not sure. Does this seem like the right way to go about this?
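    There may well be library implementations around, but one common way to sidestep the overflow question is to do the arithmetic in a long and reduce modulo a large prime, so the hash value stays well defined instead of silently wrapping. A rough sketch (the base and modulus here are arbitrary choices, not taken from any particular reference implementation):
      public class RollingHash {
          static final long BASE = 101;               // the constant "a" in the formula
          static final long MOD  = 1_000_000_007L;    // large prime; intermediate values stay inside a long

          // Hash of the first window s[0..k-1], Horner style:
          // H = s[0]*BASE^(k-1) + s[1]*BASE^(k-2) + ... + s[k-1]  (mod MOD)
          static long hash(String s, int k) {
              long h = 0;
              for (int i = 0; i < k; i++) {
                  h = (h * BASE + s.charAt(i)) % MOD;
              }
              return h;
          }

          // Slide the window one character: remove 'out' (the oldest), append 'in'.
          // pow must be BASE^(k-1) mod MOD, precomputed once for window length k.
          static long roll(long h, char out, char in, long pow) {
              h = (h - out * pow % MOD + MOD) % MOD;  // drop the outgoing character
              return (h * BASE + in) % MOD;           // shift and add the incoming one
          }

          static long power(long base, int exp) {
              long p = 1;
              for (int i = 0; i < exp; i++) p = p * base % MOD;
              return p;
          }

          public static void main(String[] args) {
              String text = "stackoverflow";
              int k = 5;
              long pow = power(BASE, k - 1);
              long h = hash(text, k);
              System.out.println(text.substring(0, k) + " -> " + h);
              for (int i = k; i < text.length(); i++) {
                  h = roll(h, text.charAt(i - k), text.charAt(i), pow);
                  System.out.println(text.substring(i - k + 1, i + 1) + " -> " + h);
              }
          }
      }
    Since this MOD is below 2^31, the final values also fit in an int if 32-bit storage is required; only the intermediate arithmetic needs a long.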

    Read the article

  • Does anyone know of a library in Java that can parse ESRI Shapefiles?

    - by KingNestor
    I'm interested in writing a visualization program for the road data in the 2009 TIGER/Line Shapefiles. I'd like to draw the line data to display all the roads for my county. The ESRI Shapefile, or simply shapefile, is a popular geospatial vector data format for geographic information systems software. It is developed and regulated by ESRI as a (mostly) open specification for data interoperability among ESRI and other software products. A "shapefile" commonly refers to a collection of files with ".shp", ".shx", ".dbf", and other extensions on a common prefix name (e.g., "lakes.*"). The actual shapefile relates specifically to files with the ".shp" extension; however, this file alone is incomplete for distribution, as the other supporting files are required. Does anyone know of existing libraries for parsing and reading in the line data for shapefiles?

    Read the article

  • algorithm to use to return a specific range of nodes in a directed graph

    - by GatesReign
    I have a class Graph with two lists, namely nodes and edges. I have a function
      List<int> GetNodesInRange(Graph graph, int Range)
    When I get these parameters, I need an algorithm that will go through the graph and return the list of nodes only as deep (as many levels) as the range. The algorithm should be able to accommodate a large number of nodes and large ranges. On top of this, if I use a similar function
      List<int> GetNodesInRange(Graph graph, int Range, int selected)
    I want to be able to search outwards from the selected node, to the number of nodes outwards (the range) specified. So for the first function, I expect it to return the nodes placed in the blue box. For the other function, if I pass the nodes as in the graph with a range of 1 and it starts at node 5, I want it to return the list of nodes that satisfy this criterion (placed in the orange box).
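    Assuming "range" means the number of edges away from the starting node, a breadth-first search that stops expanding at that depth fits: it only touches the nodes and edges inside the range, so it copes with large graphs. A sketch in Java (the adjacency-list Graph here is invented for illustration; the question's own Graph class would slot in instead, and the overload without a start node can run the same search from each node it should start from):
      import java.util.*;

      public class GraphRange {
          // Hypothetical graph representation: node id -> ids of its direct successors.
          static class Graph {
              Map<Integer, List<Integer>> adjacency = new HashMap<>();
          }

          // All nodes reachable from 'selected' in at most 'range' hops (breadth-first).
          static List<Integer> getNodesInRange(Graph graph, int range, int selected) {
              List<Integer> result = new ArrayList<>();
              Set<Integer> visited = new HashSet<>();
              Queue<int[]> queue = new ArrayDeque<>();   // each entry is {node, depth}
              queue.add(new int[] { selected, 0 });
              visited.add(selected);
              while (!queue.isEmpty()) {
                  int[] current = queue.poll();
                  int node = current[0], depth = current[1];
                  result.add(node);
                  if (depth == range) continue;          // don't expand past the range
                  for (int next : graph.adjacency.getOrDefault(node, Collections.emptyList())) {
                      if (visited.add(next)) {
                          queue.add(new int[] { next, depth + 1 });
                      }
                  }
              }
              return result;
          }
      }
    The work is proportional to the nodes and edges actually within the range, not to the whole graph.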

    Read the article

  • How to mount an Oracle database to new instance?

    - by Vimvq1987
    I have an instance of Oracle 10g R2 installed on Windows Server 2003. This instance was running a database which does not have any backup. Now the OS has gone down and could not be repaired; all I have are the files of the old instance. How can I restore the database from these files to a new instance? A step-by-step guide would be much appreciated, because I'm new to Oracle. Thank you very much.

    Read the article

  • How to add a whole directory or project output to WiX package

    - by Coincoin
    We decided to switch from the VS integrated setup to WiX. However, what we currently do is use project output files as the input for the setup project. This lets us easily add Application Files to a directory (for images, samples, and other resources...) and those files are automatically added to the setup when we build. I could not find any similar feature in WiX. WiX seems to require one Directory entry and one File entry for each and every directory and file. This would require us to change the WiX source every time a file is added, which, to my eyes, is prohibitive since we have so many of them. Is there any integrated way of doing that with WiX, or do I have to write my own task that generates a WiX source before calling candle?

    Read the article

  • Java - When to use Iterators?

    - by Walter White
    Hi all, I am trying to better understand when I should and should not use Iterators. To me, whenever I have a potentially large amount of data to iterate through, I write an Iterator for it. If it also lends itself to the Iterator interface, then it seems like a win. I have read a little that there is a lot of overhead in using an Iterator. A good example of where I used an Iterator was to iterate through a bunch of SQL scripts, executing one query at a time: reading it in, then executing it. Is there another performance trade-off I should be aware of? Before I used Iterators, I would read the entire String of SQL commands to execute into an ArrayList, and then iterate through that. If the import is rather large (like for geolocation data), the server tends to get bogged down. Walter
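    For what it's worth, the streaming pattern described here, pulling one statement at a time instead of loading the whole script into an ArrayList first, can be sketched like this (a sketch only; it assumes, purely for brevity, one statement per line):
      import java.io.*;
      import java.util.Iterator;
      import java.util.NoSuchElementException;

      // Streams statements from a script file one at a time instead of reading
      // the whole file into memory first.
      public class StatementIterator implements Iterator<String>, Closeable {
          private final BufferedReader reader;
          private String next;

          public StatementIterator(File script) throws IOException {
              reader = new BufferedReader(new FileReader(script));
              next = reader.readLine();           // read ahead so hasNext() is cheap
          }

          @Override
          public boolean hasNext() {
              return next != null;
          }

          @Override
          public String next() {
              if (next == null) throw new NoSuchElementException();
              String current = next;
              try {
                  next = reader.readLine();
              } catch (IOException e) {
                  throw new UncheckedIOException(e);
              }
              return current;
          }

          @Override
          public void close() throws IOException {
              reader.close();
          }
      }
    Memory use stays constant no matter how big the script gets; the trade-off is that you only get a single forward pass, whereas the ArrayList approach lets you revisit or reorder statements.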

    Read the article

  • Javascript upload in spite of same origin policy

    - by Brian M. Hunt
    Can one upload files to a domain other than the domain a script originates from? For example, suppose you're hosting files on www.example.com and you want to upload files to uploads.example.com; would the following script (using uploadify) violate the same-origin policy?
      <!-- from http://www.example.com/upload.html -->
      <input id="fileInput" name="fileInput" type="file" />
      <script type="text/javascript">// <![CDATA[
      $(document).ready(function() {
          $('#fileInput').uploadify({
              'uploader'  : 'uploadify.swf',
              'script'    : 'http://upload.example.com/uploadify.php',
              'cancelImg' : 'cancel.png',
              'auto'      : true,
              'folder'    : '/uploads'
          });
      });
      // ]]></script>
    I haven't seen a clear reference on the same-origin policy that would indicate one way or the other whether this would be an issue with any browsers.

    Read the article

  • What's the fastest filesystem for developer builds?

    - by Dan Fabulich
    I'm putting together a Linux box that will act as a continuous integration build server; we'll mostly build Java stuff, but I think this question applies to any compiled language. What filesystem and configuration settings should I use? (For example, I know I won't need atime for this!) The build server will spend a lot of time reading and writing small files, and scanning directories to see which files have been modified.

    Read the article

  • Using XMLDecoder to cast Encoded XML to List<>

    - by Ender
    I am writing an application that reads in a large number of basic user details in the following format; once read in it then allows the user to search for a user's details using their email: NAME ROLE EMAIL --------------------------------------------------- Joe Bloggs Manager [email protected] John Smith Consultant [email protected] Alan Wright Tester [email protected] ... The problem I am suffering is that I need to store a large number of details of all people that have worked at the company. The file containing these details will be written on a yearly basis simply for reporting purposes, but the program will need to be able to access these details quickly. The way I aim to access these files is to have a program that asks the user for the name of the unique email of the member of staff and for the program to then return the name and the role from that line of the file. I've played around with text files, but am struggling with how I would handle multiple columns of data when it comes to searching this large file. What is the best format to store such data in? A text file? XML? The size doesn't bother me, but I'd like to be able to search it as quickly as possible. The file will need to contain a lot of entries, probably over the 10K mark over time. EDIT: I've decided to go with the XML serialisation method. I've managed to get the code for Encoding working perfectly, but the Decoding code below does not work. XMLDecoder d = new XMLDecoder( new BufferedInputStream(new FileInputStream("data.xml"))); List<Employee> list = (List<Employee>) d.readObject(); d.close(); for(Employee x : list) { if(x.getEmail().equals(userInput)) { // do stuff } } When the program hits List<Employee> list = (List<Employee>) d.readObject(); an exception is thrown claiming that "Employee cannot be cast to java.util.List". I've added a bounty to this and anyone that can help me solve this problem once and for all will get lots of lovely points. EDIT 2: I've looked a bit more into the problem and have come across Serialization as a potential answer. If anyone can look into this for me as I've no experience with Serialization or Deserialization I'd be very grateful. It can provide an Object with no problems whatsoever, but I really need to return it in the same format as it went in (List). EDIT 3: Ugh, this problem is really starting to drive me crazy and to be honest I'm starting to think that it's an unsolvable problem. If possible, could someone take a look at the code and help provide a solution for me?

    Read the article

  • How do I access a hard drive from a computer where windows is deleted?

    - by Intredasting
    Here is the current situation: my cousin deleted Windows from his hard drive (yeah, don't ask...). His hard drive still has about 200 GB of files on it that he may want to recover before we format the drive and reinstall Windows 7 on it. Is there a way I can create a bootable CD from some utility that will allow me to access the files on the hard drive and copy them to a flash drive? What's the best utility for that?

    Read the article
