Search Results

Search found 9105 results on 365 pages for 'disk quota'.

Page 153/365 | < Previous Page | 149 150 151 152 153 154 155 156 157 158 159 160  | Next Page >

  • Using OpenCV with in-memory buffers or file pointers

    - by The Unknown
    The two functions in OpenCV, cvLoadImage and cvSaveImage, accept file paths as arguments. For example, when saving an image it's cvSaveImage("/tmp/output.jpg", dstIpl) and it writes to the disk. Is there any way to feed these a buffer already in memory? So instead of a disk write, the output image would be in memory. I would also like to know this for both cvSaveImage and cvLoadImage (read and write to memory buffers). Thanks! My goal is to store the encoded (JPEG) version of the file in memory. The same goes for cvLoadImage: I want to load a JPEG that's in memory into the IplImage format.
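
    One approach, assuming a build of OpenCV new enough to provide the C API's cvEncodeImage and cvDecodeImage, is to encode and decode entirely in memory; a rough sketch (input path is a placeholder):

        #include <opencv/cv.h>
        #include <opencv/highgui.h>
        #include <stdio.h>

        int main(void)
        {
            IplImage *src = cvLoadImage("/tmp/input.jpg", CV_LOAD_IMAGE_COLOR); /* placeholder input */
            if (!src) return 1;

            /* In-memory replacement for cvSaveImage(): encode to a JPEG byte buffer. */
            CvMat *jpeg = cvEncodeImage(".jpg", src, 0);
            printf("encoded %d bytes in memory\n", jpeg->cols);

            /* In-memory replacement for cvLoadImage(): wrap an existing JPEG buffer and decode it. */
            CvMat buf = cvMat(1, jpeg->cols, CV_8UC1, jpeg->data.ptr);
            IplImage *decoded = cvDecodeImage(&buf, CV_LOAD_IMAGE_COLOR);

            cvReleaseImage(&decoded);
            cvReleaseMat(&jpeg);
            cvReleaseImage(&src);
            return 0;
        }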

    Read the article

  • Java object caching, which is faster, reading from a file or from a remote machine?

    - by Kumar225
    I am at the point where I need to decide what to do when the object cache reaches its configured threshold. Should I store the objects in an indexed file (like the one provided by JCS) and read them from the file (file IO) when required, or have the objects stored in a distributed cache (network, serialization, deserialization)? We are using Solaris as the OS. Edit: Adding some more information. I am asking this so as to determine whether I can switch to distributed caching. The remote server that will hold the cache will have more memory and a better disk, and it will be used only for caching. One reason we cannot increase the number of locally cached objects is that they are stored in the JVM heap, which has limited memory (we use a 32-bit JVM). Update: Thanks, we finally ended up choosing Coherence as our cache product. It provides many cache configuration topologies: in-process vs. remote vs. disk, etc.

    Read the article

  • How do I merge a local branch into TFS

    - by Johnny
    Hi, I did a stupid thing and branched my project on my local disk instead of doing it in TFS. So now I have two projects on my disk: the old one, which has TFS bindings, and the new one, which doesn't. I want to merge those changes back into the TFS project. How would I go about doing that? I can't do a Compare because my local branch has no TFS bindings. There should be some way to compare the differences between the two projects locally and then meld the differences into the old project and check in, but I can't find an easy way of doing that. Any other solutions?

    Read the article

  • NTFS-compressing Virtual PC disks (on host and/or guest)

    - by nlawalker
    I'm hoping someone here can answer these definitively: Does putting a VHD file in an NTFS-compressed folder on the host improve performance of the virtual machine, diminish performance, or neither? What about using NTFS compression within the guest? Does using compression on either the host or the guest lead to any problems like read or write errors? If I were to put a VHD in a compressed folder on the host, would I benefit from compacting it? I've seen references to using NTFS compression on quite a few VPC "tips and tricks" blog posts, and it seems like half of them say to never do it and the other half say that not only does it save disk space but it actually can improve performance if you have a fast CPU and your primary performance bottleneck is the disk.

    Read the article

  • Autocomplete server-side implementation

    - by toluju
    What is a fast and efficient way to implement the server-side component for an autocomplete feature in an HTML input box? I am writing a service to autocomplete user queries in our web interface's main search box, and the completions are displayed in an ajax-powered dropdown. The data we are running queries against is simply a large table of concepts our system knows about, which matches roughly with the set of Wikipedia page titles. For this service obviously speed is of utmost importance, as responsiveness of the web page is important to the user experience. The current implementation simply loads all concepts into memory in a sorted set, and performs a simple log(n) lookup on a user keystroke. The tailset is then used to provide additional matches beyond the closest match. The problem with this solution is that it does not scale. It currently is running up against the VM heap space limit (I've set -Xmx2g, which is about the most we can push on our 32-bit machines), and this prevents us from expanding our concept table or adding more functionality. Switching to 64-bit VMs on machines with more memory isn't an immediate option. I've been hesitant to start working on a disk-based solution as I am concerned that disk seek time will kill performance. Are there possible solutions that will let me scale better, either entirely in memory or with some fast disk-backed implementations? Edits: @Gandalf: For our use case it is important that the autocompletion is comprehensive and isn't just extra help for the user. As for what we are completing, it is a list of concept-type pairs. For example, possible entries are [("Microsoft", "Software Company"), ("Jeff Atwood", "Programmer"), ("StackOverflow.com", "Website")]. We are using Lucene for the full search once a user selects an item from the autocomplete list, but I am not yet sure Lucene would work well for the autocomplete itself. @Glen: No databases are being used here. When I'm talking about a table I just mean the structured representation of my data. @Jason Day: My original implementation of this problem was to use a Trie, but the memory bloat with that was actually worse than the sorted set due to needing a large number of object references. I'll read up on ternary search trees to see if they could be of use.
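
    As a point of reference for the in-memory approach described above, a minimal Java sketch of the tailSet-based prefix lookup (illustrative names only, not the asker's actual code) looks roughly like this:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.TreeSet;

        public class PrefixCompleter {
            private final TreeSet<String> concepts = new TreeSet<String>();

            public void add(String concept) {
                concepts.add(concept.toLowerCase());
            }

            // O(log n) to locate the prefix, then walk forward while entries still match it.
            public List<String> complete(String prefix, int maxResults) {
                String p = prefix.toLowerCase();
                List<String> results = new ArrayList<String>();
                for (String candidate : concepts.tailSet(p)) {
                    if (!candidate.startsWith(p) || results.size() >= maxResults) {
                        break;
                    }
                    results.add(candidate);
                }
                return results;
            }

            public static void main(String[] args) {
                PrefixCompleter c = new PrefixCompleter();
                c.add("Microsoft");
                c.add("Jeff Atwood");
                c.add("StackOverflow.com");
                System.out.println(c.complete("micro", 10)); // [microsoft]
            }
        }

    The scaling question is then whether the same lookup shape can be served from a disk-backed structure such as the ternary search tree mentioned above, instead of from the heap.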

    Read the article

  • Use hg to synchronize my project between my two computers

    - by hguser
    Hi: I have two computers: the desktop at my company and the portable computer at my home. Now I want to use hg to synchronize the project between them using a USB removable disk. So I wonder how to implement it? The project on my desktop is in D:\work\mypro. I use the following command to init it: hg init Then I connect the USB disk whose volume label is "H", and get a clone using: cd H: hg init hg clone D:\work\mypro mypro-usb And on my portable computer I use: cd D: hg clone H:\mypro-usb mypro-home However, I do not know what to do if I modify some files (remove, add or modify) in mypro-home: how do I get mypro-usb updated in sync, and also mypro on my desktop? How to do it?
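
    A workflow sketch that fits this setup (drive letter and paths as in the question; mypro-usb acts as the hub that both machines push to and pull from):

        :: at home, after editing files in D:\mypro-home
        cd /d D:\mypro-home
        hg addremove
        hg commit -m "changes made at home"
        hg push H:\mypro-usb

        :: later, at the office, bring those changes into D:\work\mypro
        cd /d D:\work\mypro
        hg pull H:\mypro-usb
        hg update

    The same two steps run in the other direction (commit and push from D:\work\mypro, pull and update in D:\mypro-home) keep all three copies in sync; the USB repository never needs a working copy of its own.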

    Read the article

  • How to serve a View as CSV in ASP.NET Web Forms

    - by ChessWhiz
    Hi, I have an MS SQL view that I want to make available as a CSV download in my ASP.NET Web Forms app. I am using Entity Framework for other views and tables in the project. What's the best way to enable this download? I could add a HyperLink whose click handler iterates over the view, writes its CSV form to the disk, and then serves that file. However, I'd prefer not to write to the disk if it can be avoided, and that involves iteration code that may be avoided with some other solution. Any ideas?
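
    One way to avoid the disk entirely is to stream the rows straight into the response from the click handler; a rough Web Forms code-behind sketch (the entity context and view names here are hypothetical placeholders):

        protected void DownloadCsv_Click(object sender, EventArgs e)
        {
            Response.Clear();
            Response.ContentType = "text/csv";
            Response.AddHeader("Content-Disposition", "attachment; filename=report.csv");

            Response.Write("Id,Name,Total\r\n");
            using (var ctx = new MyEntities())            // hypothetical EF context
            {
                foreach (var row in ctx.MyReportView)     // hypothetical EF-mapped view
                {
                    Response.Write(string.Format("{0},\"{1}\",{2}\r\n",
                        row.Id, row.Name.Replace("\"", "\"\""), row.Total));
                }
            }
            Response.End();
        }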

    Read the article

  • Red Hat cluster: Failure of one of two services sharing the same virtual IP tears down IP

    - by js.
    I'm creating a 2+1 failover cluster under Red Hat 5.5 with 4 services of which 2 have to run on the same node, sharing the same virtual IP address. One of the services on each node needs a (SAN) disk, the other doesn't. I'm using HA-LVM. When I shut down (via ifdown) the two interfaces connected to the SAN to simulate SAN failure, the service needing the disk is disabled, the other keeps running, as expected. Surprisingly (and unfortunately), the virtual IP address shared by the two services on the same machine is also removed, rendering the still-running service useless. How can I configure the cluster to keep the IP address up?

    Read the article

  • Fully automated SQL Server Restore

    - by hasen j
    I'm not very fluent with SQL Server commands. I need a script to restore a database from a .bak file and move the logical_data and logical_log files to a specific path. I can do: restore filelistonly from disk='D:\backups\my_backup.bak' This will give me a result set with a column LogicalName; next I need to use the logical names from the result set in the restore command: restore database my_db_name from disk='D:\backups\my_backup.bak' with file=1, move 'logical_data_file' to 'd:\data\mydb.mdf', move 'logical_log_file' to 'd:\data\mylog.ldf' How do I capture the logical names from the first result set into variables that can be supplied to the "move" command? I think the solution might be trivial, but I'm pretty new to SQL Server.
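
    One common pattern is to INSERT ... EXEC the FILELISTONLY result set into a table variable and read the logical names from it. A sketch, assuming the SQL Server 2005 column layout of RESTORE FILELISTONLY (later versions add columns such as TDEThumbprint, so the table definition must match your server):

        DECLARE @files TABLE (
            LogicalName nvarchar(128), PhysicalName nvarchar(260), [Type] char(1),
            FileGroupName nvarchar(128), Size numeric(20,0), MaxSize numeric(20,0),
            FileId bigint, CreateLSN numeric(25,0), DropLSN numeric(25,0),
            UniqueId uniqueidentifier, ReadOnlyLSN numeric(25,0), ReadWriteLSN numeric(25,0),
            BackupSizeInBytes bigint, SourceBlockSize int, FileGroupId int,
            LogGroupGUID uniqueidentifier, DifferentialBaseLSN numeric(25,0),
            DifferentialBaseGUID uniqueidentifier, IsReadOnly bit, IsPresent bit);

        INSERT INTO @files
        EXEC('restore filelistonly from disk=''D:\backups\my_backup.bak''');

        DECLARE @data nvarchar(128), @log nvarchar(128);
        SELECT @data = LogicalName FROM @files WHERE [Type] = 'D';
        SELECT @log  = LogicalName FROM @files WHERE [Type] = 'L';

        RESTORE DATABASE my_db_name
        FROM DISK = 'D:\backups\my_backup.bak'
        WITH FILE = 1,
             MOVE @data TO 'd:\data\mydb.mdf',
             MOVE @log  TO 'd:\data\mylog.ldf';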

    Read the article

  • mdadm: Win7-install created a boot partition on one of my RAID6 drives. How to rebuild?

    - by EXIT_FAILURE
    My problem happened when I attempted to install Windows 7 on its own SSD. The Linux OS I use, which knows about the software RAID system, is on an SSD that I disconnected prior to the install, so that Windows (or I) wouldn't inadvertently mess it up. However, and in retrospect foolishly, I left the RAID disks connected, thinking that Windows wouldn't be so ridiculous as to mess with an HDD that it sees as just unallocated space. Boy was I wrong! After copying over the installation files to the SSD (as expected and desired), it also created an ntfs partition on one of the RAID disks. Both unexpected and totally undesired!

    I changed out the SSDs again, and booted up in Linux. mdadm didn't seem to have any problem assembling the array as before, but if I tried to mount the array, I got the error message:

        mount: wrong fs type, bad option, bad superblock on /dev/md0,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

    dmesg:

        EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
        EXT4-fs (md0): group descriptors corrupted!

    I then used qparted to delete the newly created ntfs partition on /dev/sdd so that it matched the other three /dev/sd{b,c,e}, and requested a resync of my array with

        echo repair > /sys/block/md0/md/sync_action

    This took around 4 hours, and upon completion, dmesg reports:

        md: md0: requested-resync done.

    A bit brief after a 4-hour task, though I'm unsure as to where other log files exist (I also seem to have messed up my sendmail configuration). In any case: no change reported according to mdadm, everything checks out. mdadm -D /dev/md0 still reports:

        Version : 1.2
        Creation Time : Wed May 23 22:18:45 2012
        Raid Level : raid6
        Array Size : 3907026848 (3726.03 GiB 4000.80 GB)
        Used Dev Size : 1953513424 (1863.02 GiB 2000.40 GB)
        Raid Devices : 4
        Total Devices : 4
        Persistence : Superblock is persistent

        Update Time : Mon May 26 12:41:58 2014
        State : clean
        Active Devices : 4
        Working Devices : 4
        Failed Devices : 0
        Spare Devices : 0

        Layout : left-symmetric
        Chunk Size : 4K

        Name : okamilinkun:0
        UUID : 0c97ebf3:098864d8:126f44e3:e4337102
        Events : 423

        Number   Major   Minor   RaidDevice   State
           0       8      16        0         active sync   /dev/sdb
           1       8      32        1         active sync   /dev/sdc
           2       8      48        2         active sync   /dev/sdd
           3       8      64        3         active sync   /dev/sde

    Trying to mount it still reports the same "wrong fs type, bad option, bad superblock" error, and dmesg again shows:

        EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
        EXT4-fs (md0): group descriptors corrupted!

    I'm a bit unsure where to proceed from here, and trying stuff "to see if it works" is a bit too risky for me. This is what I suggest I should attempt to do: tell mdadm that /dev/sdd (the one that Windows wrote into) isn't reliable anymore, pretend it is newly re-introduced to the array, and reconstruct its content based on the other three drives. I also could be totally wrong in my assumptions, that the creation of the ntfs partition on /dev/sdd and its subsequent deletion has changed something that cannot be fixed this way. My question: Help, what should I do? If I should do what I suggested, how do I do that?
    From reading the documentation, etc., I would think maybe:

        mdadm --manage /dev/md0 --set-faulty /dev/sdd
        mdadm --manage /dev/md0 --remove /dev/sdd
        mdadm --manage /dev/md0 --re-add /dev/sdd

    However, the documentation examples suggest /dev/sdd1, which seems strange to me, as there is no partition there as far as Linux is concerned, just unallocated space. Maybe these commands won't work without one. Maybe it makes sense to mirror the partition table of one of the other raid devices that weren't touched, before --re-add. Something like:

        sfdisk -d /dev/sdb | sfdisk /dev/sdd

    Bonus question: Why would the Windows 7 installation do something so st...potentially dangerous?

    Update: I went ahead and marked /dev/sdd as faulty, and removed it (not physically) from the array:

        # mdadm --manage /dev/md0 --set-faulty /dev/sdd
        # mdadm --manage /dev/md0 --remove /dev/sdd

    However, attempting to --re-add was disallowed:

        # mdadm --manage /dev/md0 --re-add /dev/sdd
        mdadm: --re-add for /dev/sdd to /dev/md0 is not possible

    --add was fine:

        # mdadm --manage /dev/md0 --add /dev/sdd

    mdadm -D /dev/md0 now reports the state as clean, degraded, recovering, and /dev/sdd as spare rebuilding. /proc/mdstat shows the recovery progress:

        md0 : active raid6 sdd[4] sdc[1] sde[3] sdb[0]
              3907026848 blocks super 1.2 level 6, 4k chunk, algorithm 2 [4/3] [UU_U]
              [>....................]  recovery =  2.1% (42887780/1953513424) finish=348.7min speed=91297K/sec

    nmon also shows the expected output:

        sdb    0%   87.3    0.0 | >
        sdc   71%  109.1    0.0 | RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR >
        sdd   40%    0.0   87.3 | WWWWWWWWWWWWWWWWWWWW >
        sde    0%   87.3    0.0 | >

    It looks good so far. Crossing my fingers for another five+ hours :)

    Update 2: The recovery of /dev/sdd finished, with dmesg output:

        [44972.599552] md: md0: recovery done.
        [44972.682811] RAID conf printout:
        [44972.682815]  --- level:6 rd:4 wd:4
        [44972.682817]  disk 0, o:1, dev:sdb
        [44972.682819]  disk 1, o:1, dev:sdc
        [44972.682820]  disk 2, o:1, dev:sdd
        [44972.682821]  disk 3, o:1, dev:sde

    Attempting to mount /dev/md0 still reports the same "wrong fs type, bad option, bad superblock" error, and dmesg shows:

        [44984.159908] EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
        [44984.159912] EXT4-fs (md0): group descriptors corrupted!

    I'm not sure what to do now. Suggestions?
    Output of dumpe2fs /dev/md0:

        dumpe2fs 1.42.8 (20-Jun-2013)
        Filesystem volume name:   Atlas
        Last mounted on:          /mnt/atlas
        Filesystem UUID:          e7bfb6a4-c907-4aa0-9b55-9528817bfd70
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    user_xattr acl
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              244195328
        Block count:              976756712
        Reserved block count:     48837835
        Free blocks:              92000180
        Free inodes:              243414877
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      791
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        RAID stripe width:        2
        Flex block group size:    16
        Filesystem created:       Thu May 24 07:22:41 2012
        Last mount time:          Sun May 25 23:44:38 2014
        Last write time:          Sun May 25 23:46:42 2014
        Mount count:              341
        Maximum mount count:      -1
        Last checked:             Thu May 24 07:22:41 2012
        Check interval:           0 (<none>)
        Lifetime writes:          4357 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      e177a374-0b90-4eaa-b78f-d734aae13051
        Journal backup:           inode blocks
        dumpe2fs: Corrupt extent header while reading journal super block

    Read the article

  • Portable and Secure Document Repository

    - by Sivakanesh
    I'm trying to find a document manager/repository (WinXP) that can be used from a USB disk. I would like a tool that will allow you to add all documents into a single repository (or a secure file system). Ideally you would login to this portable application to add or retrieve a document and document shouldn't be accessible outside of the application. I have found an application called Benubird Pro (app is portable) that allows you to add files to a single repository, but downsides are that it is not secure and the repository is always stored on the PC and not on the USB disk. Are you able to recommend any other applications? Thanks

    Read the article

  • Application_End() cannot access cache through HttpContext.Current.Cache[key]

    - by Carl J.
    I want to be able to maintain certain objects between application restarts. To do that, I want to write specific cached items out to disk in the Global.asax Application_End() function and re-load them on Application_Start(). I currently have a cache helper class, which uses the following method to return the cached value: return HttpContext.Current.Cache[key]; Problem: during Application_End(), HttpContext.Current is null since there is no web request (it's an automated cleanup procedure) - therefore, I cannot access .Cache[] to retrieve any of the items to save to disk. Question: how can I access the cache items during Application_End()?
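
    The same cache is reachable without an HttpContext via HttpRuntime.Cache, which still works during Application_End(); a sketch of what the cleanup could look like (the serialization helper is hypothetical):

        using System.Collections;
        using System.Web;
        using System.Web.Caching;

        public static class CachePersistence
        {
            // Call this from Global.asax Application_End(); HttpContext.Current is null there,
            // but HttpRuntime.Cache still points at the same cache instance.
            public static void SaveToDisk(string folder)
            {
                Cache cache = HttpRuntime.Cache;
                foreach (DictionaryEntry entry in cache)
                {
                    WriteItem(folder, (string)entry.Key, entry.Value);
                }
            }

            // Hypothetical helper: serialize one cache entry to a file under 'folder'.
            static void WriteItem(string folder, string key, object value)
            {
                // e.g. BinaryFormatter or XmlSerializer; omitted here.
            }
        }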

    Read the article

  • Saving contents of ApplicationState in ASP.Net (MVC)

    - by Saqib
    I have an internal app used to edit XML files on disk. The XML files are loaded into an object model which is stored in ApplicationState. I need to save this data. One option is to do this every time the user changes some data. However, this seems a bit inefficient - writing the data out to disk each time a change is made. Instead, is it possible to be notified whenever a user closes their browser, plus just before the web server exits? Thus, the data would be saved each time a session ends, plus when the computer shuts down, etc. I thought that Application_End(), Application_Error() and Session_End() in Global.asax would provide this, but these methods don't seem to be called.

    Read the article

  • .NET and C# Exceptions. What is it reasonable to catch?

    - by djna
    Disclaimer: I'm from a Java background. I don't do much C#. There's a great deal of transfer between the two worlds, but of course there are differences, and one is in the way exceptions tend to be thought about. I recently answered a C# question suggesting that under some circumstances it's reasonable to do this: try { some work } catch (Exception e) { commonExceptionHandler(); } (The reasons why are immaterial.) I got a response that I don't quite understand: "until .NET 4.0, it's very bad to catch Exception. It means you catch various low-level fatal errors and so disguise bugs. It also means that in the event of some kind of corruption that triggers such an exception, any open finally blocks on the stack will be executed, so even if the callExceptionReporter function tries to log and quit, it may not even get to that point (the finally blocks may throw again, or cause more corruption, or delete something important from the disk or database)." Maybe I'm more confused than I realise, but I don't agree with some of that. Please would other folks comment. I understand that there are many low-level Exceptions we don't want to swallow. My commonExceptionHandler() function could reasonably rethrow those. This seems consistent with this answer to a related question, which does say "Depending on your context it can be acceptable to use catch(...), providing the exception is re-thrown." So I conclude that using catch (Exception) is not always evil; silently swallowing certain exceptions is. The phrase "Until .NET 4 it is very bad to catch Exception": what changes in .NET 4? Is this a reference to AggregateException, which may give us some new things to do with exceptions we catch, but which I don't think changes the fundamental "don't swallow" rule? The next phrase really bothers me. Can this be right? "It also means that in the event of some kind of corruption that triggers such an exception, any open finally blocks on the stack will be executed (the finally blocks may throw again, or cause more corruption, or delete something important from the disk or database)." My understanding is that if some low-level code had lowLevelMethod() { try { lowestLevelMethod(); } finally { some really important stuff } } and in my code I call try { lowLevel() } catch (Exception e) { exception handling and maybe rethrowing } then whether or not I catch Exception has no effect whatever on the execution of the finally block. By the time we leave lowLevelMethod() the finally has already run. If the finally is going to do any of the bad things, such as corrupt my disk, then it will do so. My catching the Exception made no difference. If it reaches my Exception block I need to do the right thing, but I can't be the cause of mis-executing finallys.
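
    A small self-contained C# sketch (hypothetical method names) of the point being made here: the low-level finally has already run, during unwinding, before the caller's catch (Exception) body executes, whether or not the caller catches anything:

        using System;

        class FinallyOrderDemo
        {
            static void LowestLevelMethod()
            {
                throw new InvalidOperationException("low-level failure");
            }

            static void LowLevelMethod()
            {
                try
                {
                    LowestLevelMethod();
                }
                finally
                {
                    // Runs while the stack unwinds, before the caller's catch block body.
                    Console.WriteLine("low-level finally");
                }
            }

            static void Main()
            {
                try
                {
                    LowLevelMethod();
                }
                catch (Exception e)
                {
                    // "low-level finally" has already been printed by the time we get here.
                    Console.WriteLine("caller catch: " + e.Message);
                }
            }
        }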

    Read the article

  • Warning in gdb while running application in device mode

    - by dragon
    I get a warning in gdb while running my application in device mode. The warning message is: warning: UUID mismatch detected with the loaded library - on disk is: /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.2.sdk/System/Library/PrivateFrameworks/MBX2D.framework/MBX2D =uuid-mismatch-with-loaded-file,file="/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.2.sdk/System/Library/PrivateFrameworks/MBX2D.framework/MBX2D" warning: UUID mismatch detected with the loaded library - on disk is: /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.2.sdk/usr/lib/libxml2.2.dylib =uuid-mismatch-with-loaded-file,file="/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.2.sdk/usr/lib/libxml2.2.dylib" The application does not load on the iPod; a black screen is shown for a long time. How can I fix this? Can anyone help me? Thanks in advance.

    Read the article

  • Determine cluster size of file system in Python

    - by Philip Fourie
    I would like to calculate the "size on disk" of a file in Python. Therefore I would like to determine the cluster size of the file system where the file is stored. How do I determine the cluster size in Python? Or another built-in method that calculates the "size on disk" will also work. I looked at os.path.getsize but it returns the file size in bytes, not taking the FS's block size into consideration. I am hoping that this can be done in an OS independent way...
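
    A minimal sketch of one way to get at both numbers on a Unix-like system (os.statvfs and st_blocks are POSIX-only; a Windows build would need something like ctypes with GetDiskFreeSpaceW instead):

        import os

        def cluster_size(path):
            """Fundamental block ('cluster') size of the filesystem holding path."""
            st = os.statvfs(path)
            return st.f_frsize or st.f_bsize

        def size_on_disk(path):
            """Approximate size on disk: allocated 512-byte blocks reported by stat()."""
            st = os.stat(path)
            return st.st_blocks * 512

        if __name__ == "__main__":
            p = "/etc/hosts"  # hypothetical example file
            print(cluster_size(p), size_on_disk(p), os.path.getsize(p))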

    Read the article

  • What's the best way to match a query to a set of keywords?

    - by Ryan Detzel
    Pretty much what you would assume Google does. Advertisers come in and bid on keywords, let's say "ipod", "ipod nano", "ipod 60GB", "used ipod", etc. Then we have a query, "I want to buy an ipod nano" or "best place to buy used ipods". What kind of algorithms and systems are used to match those queries to the keyword set? I would imagine that some of those keyword sets are huge, 100k keywords made up of one or more actual words. On top of that, queries can be 1-n words as well. Any thoughts, or links to Wikipedia I can start reading? From what I know already I would use some stemmed hash on disk (CDB?) and a bloom filter to check to see if I should even go to disk.
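
    As a naive starting point for the matching itself (in-memory only; real systems layer on stemming, the disk-backed store and the bloom filter mentioned above), one can index the keyword phrases by word count and test every contiguous word n-gram of the query against the set:

        def build_index(keywords):
            """Map phrase length (in words) -> set of normalized keyword phrases."""
            index = {}
            for kw in keywords:
                words = kw.lower().split()
                index.setdefault(len(words), set()).add(" ".join(words))
            return index

        def match(query, index):
            """Return every keyword phrase appearing as a contiguous word n-gram of the query."""
            words = query.lower().split()
            hits = []
            for n, phrases in index.items():
                for i in range(len(words) - n + 1):
                    gram = " ".join(words[i:i + n])
                    if gram in phrases:
                        hits.append(gram)
            return hits

        index = build_index(["ipod", "ipod nano", "ipod 60GB", "used ipod"])
        print(match("I want to buy an ipod nano", index))  # ['ipod', 'ipod nano']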

    Read the article

  • Mac OSX: Passing a link to a file from a user process to a kernel module.

    - by Inso Reiges
    Hello, I need to pass a link to a file from a user process to the OSX kernel driver. By link I mean anything that uniquely identifies a file on the local filesystem. I need that link to do I/O on that file in the kernel. The most obvious solution seems to be to pass a file name and use a VFS vnode lookup. However, I noticed that the Apple Disk Images helper process passes a raw data array for the image-path property to the driver when attaching a disk image file: <2f 56 6f 6c 75 6d 65 73 2f 73 74 6f 72 61 67 65 2f 74 65 73 74 32 2e 64 6d 67> What is it that diskimages-helper passes to the kernel driver? Some serialized type perhaps? If yes, what type is it and how can I use it?

    Read the article

  • Resolving Assemblies, the fuzzy way

    - by David Rutten
    Here's the setup: A pure DotNET class library is loaded by an unmanaged desktop application. The class library acts as a plugin. This plugin loads little baby plugins of its own (all DotNET class libraries), and it does so by reading the dll into memory as a byte stream, then Assembly asm = Assembly.Load(COFF_Image); The problem arises when those little baby plugins have references to other dlls. Since they are loaded via memory rather than directly from the disk, the framework often cannot find these referenced assemblies and is thus incapable of loading them. I can add an AssemblyResolve handler to my project and I can see these referenced assemblies drop past. I have a reasonably good idea about where to find these referenced assemblies on the disk, but how can I make sure that the Assembly I load is the correct one? In short, how do I reliably go from the System.ResolveEventArgs.Name field to a dll file path (presuming I have a list of all the folders where this dll could be hiding)?
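
    A rough C# sketch of one way to do this (the candidate folder list is a hypothetical placeholder; tighten or relax the identity comparison to taste):

        using System;
        using System.IO;
        using System.Reflection;

        static class FuzzyResolver
        {
            // Hypothetical folders where the referenced plugin dlls may live on disk.
            static readonly string[] SearchFolders = { @"C:\MyApp\Plugins", @"C:\MyApp\Shared" };

            public static void Install()
            {
                AppDomain.CurrentDomain.AssemblyResolve += OnAssemblyResolve;
            }

            static Assembly OnAssemblyResolve(object sender, ResolveEventArgs args)
            {
                // args.Name is a full display name, e.g. "Foo, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"
                var requested = new AssemblyName(args.Name);

                foreach (string folder in SearchFolders)
                {
                    string candidate = Path.Combine(folder, requested.Name + ".dll");
                    if (!File.Exists(candidate))
                        continue;

                    // Read the identity from disk without loading the assembly yet.
                    AssemblyName found = AssemblyName.GetAssemblyName(candidate);
                    if (AssemblyName.ReferenceMatchesDefinition(requested, found))
                        return Assembly.LoadFrom(candidate);
                }
                return null; // let the default resolution (and its failure) continue
            }
        }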

    Read the article

  • Recommended ASP.NET Shared Hosting

    - by coffeeaddict
    OK, I have to admit I'm getting fed up with www.discountasp.net's pricing model, and this annoyance has built up over the past 8 years or so. I've been with them for years and absolutely love them on the technical side; however, it's getting ridiculously expensive for how little you get. Here's my scenario: 1) I am running 2 SQL Server databases, which cost me $10 each per month, so that's $20/month for 2, and I only get 500 MB of disk space, which is horrible. 2) I am paying $10/month just for the hosting itself, for which I only get 1 gig of disk space! I mean, come on! 3) I am simply running 2 small apps (Screwturn Wiki & Subtext blog)... so I don't really care if it's up 99% or not; it's not worth paying a total of $300 just to keep these 2 apps running on discountasp.net. Anyone else feel the same? Yes, I know they have great support and probably have great servers running behind this, but in the end I really don't care as long as my site is up 95% or better. Yes, the hosting toolset rocks, but you know, I bet I can find a similar set somewhere else. I like how I can totally control IIS 7 at discountasp and I can control my own app pool etc. That's very powerful and essential. But does anyone have any good alternatives to discountasp that give me close to the same at a much more reasonable cost point? I mean, http://www.m6.net/prices.aspx gives you 10 SQL databases for $7 and 200 gigs of disk space! I don't know about their tools or support, but just looking at those numbers and some other hosts I've seen, I feel that discountasp.net is way out of line. They don't even offer any purchasing discounts; it would be nice if my 2nd SQL Server were only $5/month, not $10... stuff like this, to make it much more realistic and fair. Opinions (people who do have discountasp.net, people who have left them, or people who have another host they like)? But geez, $300 just to host a couple of DBs and lightweight open source apps? Not worth the price they are charging. I'm almost at a price point that enables me to get a decent dedicated server! I really don't care about beta support. Not a big deal to me.

    Read the article

  • Installing Delphi 5 Pro on 64-bit Windows 7

    - by Larry
    Please don't laugh. Over the past 15 years or so I've written all the software that runs my medical practice in D5. Last week, when I went to DelphiArea to update a component, I got attacked and my disk became unbootable/unrecoverable. I have my original D5 Pro disk and all the components backed up, but I want to migrate to W7. I don't care if my Delphi apps look like Vista/7; I just want to be able to install it and code on the machine again for maintenance purposes. 1) Are there any tricks to install D5 so it works in W7? 2) Is using a VM program really the only/best way? If so, which is suggested? Thanks in advance. My new Gateway zx6800-03 arrives tomorrow! Larry [email protected]

    Read the article

  • How to view existing data in Core Data?

    - by mshsayem
    Well, maybe this question is silly, but I couldn't find a way (except programmatically). I built a project (for iPhone OS 3.0) which uses Core Data. The xcdatamodel file shows the schema description, but I want to see the data in tabular form (like Management Studio for MS SQL Server or phpMyAdmin for MySQL). Is there any way (other than coding)? If so, what is it? Also, which file (on disk/device) is the data stored in? [I built the tutorial (from Apple) on Core Data, named Locations. They used this line somewhere in the code: NSURL *storeUrl = [NSURL fileURLWithPath: [[self applicationDocumentsDirectory] stringByAppendingPathComponent: @"Locations.sqlite"]]; But I did not see any "xxxxx.sqlite" file in the project location (nor on the disk).]
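
    Assuming the store really is SQLite (as in the Locations sample), one low-tech way to browse it, once you locate the .sqlite file under the app's Documents directory (for the simulator, somewhere under ~/Library/Application Support/iPhone Simulator; the exact path below is only illustrative), is the sqlite3 command-line tool. Core Data typically names entity tables with a Z prefix:

        cd ~/Library/Application\ Support/iPhone\ Simulator/User/Applications/<app-id>/Documents
        sqlite3 Locations.sqlite
        sqlite> .tables
        sqlite> select * from ZEVENT limit 10;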

    Read the article
