Search Results

Search found 11618 results on 465 pages for 'shared storage'.


  • java threads don't see shared boolean changes

    - by andymur
    Here is the code: class Aux implements Runnable { private Boolean isOn = false; private String statusMessage; private final Object lock; public Aux(String message, Object lock) { this.lock = lock; this.statusMessage = message; } @Override public void run() { for (;;) { synchronized (lock) { if (isOn && "left".equals(this.statusMessage)) { isOn = false; System.out.println(statusMessage); } else if (!isOn && "right".equals(this.statusMessage)) { isOn = true; System.out.println(statusMessage); } if ("left".equals(this.statusMessage)) { System.out.println("left " + isOn); } } } } } public class Question { public static void main(String [] args) { Object lock = new Object(); new Thread(new Aux("left", lock)).start(); new Thread(new Aux("right", lock)).start(); } } In this code I expect to see: left, right, left, right, and so on. But when the thread with the "left" message changes isOn to false, the thread with the "right" message doesn't see it, and I get "right true" and "left false" console messages: the left thread never sees isOn become true, and the right thread can't change it because it always sees the old isOn value (true). When I add the volatile modifier to isOn nothing changes, but if I change isOn to some class with a boolean field and modify that field instead, the threads do see the changes and it works fine. Thanks in advance.
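
    A likely explanation (hedged, since the accepted answer is not shown here): isOn is an instance field, so each Aux object has its own copy and nothing is actually shared between the two threads; volatile on a per-instance field cannot help. A minimal sketch of one fix, passing a single AtomicBoolean to both runnables (class and variable names are illustrative, not from the article):

      import java.util.concurrent.atomic.AtomicBoolean;

      // Both runnables must reference the SAME flag object. In the original code
      // each Aux instance has its own isOn field, so the "left" and "right"
      // threads never see each other's writes.
      class SharedAux implements Runnable {
          private final AtomicBoolean isOn;   // shared, thread-safe flag
          private final String statusMessage;
          private final Object lock;

          SharedAux(String message, Object lock, AtomicBoolean isOn) {
              this.statusMessage = message;
              this.lock = lock;
              this.isOn = isOn;
          }

          @Override
          public void run() {
              for (;;) {
                  synchronized (lock) {
                      if (isOn.get() && "left".equals(statusMessage)) {
                          isOn.set(false);
                          System.out.println(statusMessage);
                      } else if (!isOn.get() && "right".equals(statusMessage)) {
                          isOn.set(true);
                          System.out.println(statusMessage);
                      }
                  }
              }
          }

          public static void main(String[] args) {
              Object lock = new Object();
              AtomicBoolean flag = new AtomicBoolean(false); // one flag, passed to both threads
              new Thread(new SharedAux("left", lock, flag)).start();
              new Thread(new SharedAux("right", lock, flag)).start();
          }
      }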

    Read the article

  • Physical storage of data in Access 2007

    - by ste
    I've been trying to estimate the size of an Access table with a certain number of records. It has 4 Longs (4 bytes each), and a Currency (8 bytes). In theory: 1 Record = 24 bytes, 500,000 = ~11.5MB However, the accdb file (even after compacting) increases by almost 30MB (~61 bytes per record). A few extra bytes for padding wouldn't be so bad, but 2.5X seems a bit excessive - even for Microsoft bloat. What's with the discrepancy? The four longs are compound keys, would that matter?

    Read the article

  • Using AWS S3 for photo storage

    - by Sam
    I'm going to be using S3 to store user-uploaded photos. Obviously, I won't be serving the image files to user agents without resizing them down. However, no single size will do, as some thumbnails will be smaller than other, larger previews. So, I was thinking of making a standard set of dimensions scaling from a smallest of 16x16 to a largest of 1024x1024. Is this a good way to solve the problem? What if I need a new size later on? How would you solve this?

    Read the article

  • Objects with inheritance memory storage

    - by nikitas350
    Say that I have some classes like this example. class A { int k, m; public: A(int a, int b) { k = a; m = b; } }; class B { int k, m; public: B() { k = 2; m = 3; } }; class C : private A, private B { int k, m; public: C(int a, int b) : A(a, b) { k = b; m = a; } }; Now, in a class C object, are the variables stored in a specific way? I know what happens in a POD object, but this is not a POD object...
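
    The standard leaves most of this layout unspecified for non-POD types, but it is easy to observe what a given compiler does. A minimal sketch (the dumpLayout member function and its output format are illustrative additions, not part of the original question):

      #include <cstdio>

      class A { int k, m; public: A(int a, int b) { k = a; m = b; } };
      class B { int k, m; public: B() { k = 2; m = 3; } };

      class C : private A, private B {
          int k, m;
      public:
          C(int a, int b) : A(a, b) { k = b; m = a; }

          // Print where each subobject and member lands relative to the start of C.
          // Casting to the private bases is legal here because we are inside a member of C.
          void dumpLayout() const {
              const char* base = reinterpret_cast<const char*>(this);
              std::printf("sizeof(C) = %zu\n", sizeof(C));
              std::printf("A subobject at offset %td\n",
                          reinterpret_cast<const char*>(static_cast<const A*>(this)) - base);
              std::printf("B subobject at offset %td\n",
                          reinterpret_cast<const char*>(static_cast<const B*>(this)) - base);
              std::printf("C::k at offset %td, C::m at offset %td\n",
                          reinterpret_cast<const char*>(&k) - base,
                          reinterpret_cast<const char*>(&m) - base);
          }
      };

      int main() {
          C c(1, 2);
          c.dumpLayout();  // typical (but not guaranteed) result: A first, then B, then C's own members
          return 0;
      }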

    Read the article

  • Quickest way to compute the number of shared elements between two vectors

    - by shn
    Suppose I have two vectors of the same size: vector< pair<float, NodeDataID> > v1, v2; I want to compute how many elements of v1 and v2 have the same NodeDataID. For example, if v1 = {<3.7, 22>, <2.22, 64>, <1.9, 29>, <0.8, 7>} and v2 = {<1.66, 7>, <0.03, 9>, <5.65, 64>, <4.9, 11>}, then I want to return 2, because two elements of v1 and v2 share the same NodeDataIDs: 7 and 64. What is the quickest way to do that in C++? Just for information, since I use Boost, the type NodeDataID is defined as: typedef adjacency_list<setS, setS, undirectedS, NodeData, EdgeData> myGraph; typedef myGraph::vertex_descriptor NodeDataID; But that is not important, since two NodeDataID values can be compared with operator== (that is, it is possible to do v1[i].second == v2[j].second).
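
    A minimal sketch of one fast approach, assuming the NodeDataIDs are unique within each vector and hashable (a vertex_descriptor for setS storage is a pointer-like type, so std::hash applies): put the IDs of one vector into an unordered_set and probe it with the other, roughly O(n) on average instead of O(n^2) for the naive double loop.

      #include <cstdio>
      #include <unordered_set>
      #include <utility>
      #include <vector>

      // Stand-in for myGraph::vertex_descriptor in this sketch.
      using NodeDataID = int;

      // Count how many NodeDataIDs occur in both vectors.
      std::size_t countSharedIds(const std::vector<std::pair<float, NodeDataID>>& v1,
                                 const std::vector<std::pair<float, NodeDataID>>& v2) {
          std::unordered_set<NodeDataID> ids;
          for (const auto& p : v1) ids.insert(p.second);

          std::size_t shared = 0;
          for (const auto& p : v2)
              if (ids.count(p.second)) ++shared;   // assumes IDs are unique within v2
          return shared;
      }

      int main() {
          std::vector<std::pair<float, NodeDataID>> v1 = {{3.7f, 22}, {2.22f, 64}, {1.9f, 29}, {0.8f, 7}};
          std::vector<std::pair<float, NodeDataID>> v2 = {{1.66f, 7}, {0.03f, 9}, {5.65f, 64}, {4.9f, 11}};
          std::printf("%zu shared NodeDataIDs\n", countSharedIds(v1, v2));  // prints 2
          return 0;
      }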

    Read the article

  • C - Error with read() of a file, storage in an array, and printing output properly

    - by ns1
    I am new to C, so I am not exactly sure where my error is. However, I do know that the great portion of the issue lies either in how I am storing the doubles in the d_buffer (double) array or the way I am printing it. Specifically, my output keeps printing extremely large numbers (with around 10-12 digits before the decimal point and a trail of zeros after it. Additionally, this is an adaptation of an older program to allow for double inputs, so I only really added the two if statements (in the "read" for loop and the "printf" for loop) and the d_buffer declaration. I would appreciate any input whatsoever as I have spent several hours on this error. #include <stdio.h> #include <fcntl.h> #include <sys/types.h> #include <unistd.h> #include <string.h> struct DataDescription { char fieldname[30]; char fieldtype; int fieldsize; }; /* ----------------------------------------------- eof(fd): returns 1 if file `fd' is out of data ----------------------------------------------- */ int eof(int fd) { char c; if ( read(fd, &c, 1) != 1 ) return(1); else { lseek(fd, -1, SEEK_CUR); return(0); } } void main() { FILE *fp; /* Used to access meta data */ int fd; /* Used to access user data */ /* ---------------------------------------------------------------- Variables to hold the description of the data - max 10 fields ---------------------------------------------------------------- */ struct DataDescription DataDes[10]; /* Holds data descriptions for upto 10 fields */ int n_fields; /* Actual # fields */ /* ------------------------------------------------------ Variables to hold the data - max 10 fields.... ------------------------------------------------------ */ char c_buffer[10][100]; /* For character data */ int i_buffer[10]; /* For integer data */ double d_buffer[10]; int i, j; int found; printf("Program for searching a mini database:\n"); /* ============================= Read in meta information ============================= */ fp = fopen("db-description", "r"); n_fields = 0; while ( fscanf(fp, "%s %c %d", DataDes[n_fields].fieldname, &DataDes[n_fields].fieldtype, &DataDes[n_fields].fieldsize) > 0 ) n_fields++; /* --- Prints meta information --- */ printf("\nThe database consists of these fields:\n"); for (i = 0; i < n_fields; i++) printf("Index %d: Fieldname `%s',\ttype = %c,\tsize = %d\n", i, DataDes[i].fieldname, DataDes[i].fieldtype, DataDes[i].fieldsize); printf("\n\n"); /* --- Open database file --- */ fd = open("db-data", O_RDONLY); /* --- Print content of the database file --- */ printf("\nThe database content is:\n"); while ( ! eof(fd) ) { /* ------------------ Read next record ------------------ */ for (j = 0; j < n_fields; j++) { if ( DataDes[j].fieldtype == 'I' ) read(fd, &i_buffer[j], DataDes[j].fieldsize); if ( DataDes[j].fieldtype == 'F' ) read(fd, &d_buffer[j], DataDes[j].fieldsize); if ( DataDes[j].fieldtype == 'C' ) read(fd, &c_buffer[j], DataDes[j].fieldsize); } double d; /* ------------------ Print it... 
------------------ */ for (j = 0; j < n_fields; j++) { if ( DataDes[j].fieldtype == 'I' ) printf("%d ", i_buffer[j]); if ( DataDes[j].fieldtype == 'F' ) d = d_buffer[j]; printf("%lf ", d); if ( DataDes[j].fieldtype == 'C' ) printf("%s ", c_buffer[j]); } printf("\n"); } printf("\n"); printf("\n"); } Post edits output: 16777216 0.000000 107245694331284094976.000000 107245694331284094976.000000 Pi 33554432 107245694331284094976.000000 2954938175610156848888276006519501238173891974277081114627768841840801736306392481516295906896346039950625609765296207682724801406770458881439696544971142710292689518104183685723154223544599940711614138798312668264956190761622328617992192.000000 2954938175610156848888276006519501238173891974277081114627768841840801736306392481516295906896346039950625609765296207682724801406770458881439696544971142710292689518104183685723154223544599940711614138798312668264956190761622328617992192.000000 Secret Key 50331648 2954938175610156848888276006519501238173891974277081114627768841840801736306392481516295906896346039950625609765296207682724801406770458881439696544971142710292689518104183685723154223544599940711614138798312668264956190761622328617992192.000000 -0.000000 -0.000000 The number E Expected Output: 3 rows of data ending with the number "e = 2.18281828" To reproduce the problem, the following two files need to be in the same directory as the lookup-data.c file: - db-data - db-description
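
    A hedged reading of the symptom: in the print loop, the if for the 'F' case has no braces, so printf("%lf ", d) runs for every field and prints an uninitialized or stale double. The sketch below shows the braced version, using hypothetical in-memory data in place of the db-data / db-description files (which are not reproduced here). It is also worth checking that the 'F' field's fieldsize in db-description really equals sizeof(double).

      #include <stdio.h>

      struct DataDescription {
          char fieldname[30];
          char fieldtype;
          int  fieldsize;
      };

      int main(void) {
          /* Hypothetical sample record standing in for one row of db-data. */
          struct DataDescription des[3] = {
              {"Id",    'I', sizeof(int)},
              {"Value", 'F', sizeof(double)},
              {"Name",  'C', 20}
          };
          int    i_buffer[3] = {16777216, 0, 0};
          double d_buffer[3] = {0.0, 2.18281828, 0.0};
          char   c_buffer[3][100] = {"", "", "The number E"};
          int j;

          for (j = 0; j < 3; j++) {
              if (des[j].fieldtype == 'I')
                  printf("%d ", i_buffer[j]);
              if (des[j].fieldtype == 'F') {      /* braces keep printf inside the 'F' branch */
                  printf("%lf ", d_buffer[j]);
              }
              if (des[j].fieldtype == 'C')
                  printf("%s ", c_buffer[j]);
          }
          printf("\n");   /* prints: 16777216 2.182818 The number E */
          return 0;
      }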

    Read the article

  • iOS AS3 AIR local storage

    - by Kere Puki
    I am developing an app which requires a SQL DB on the device. I am using File.applicationStorageDirectory and folder.resolvePath to add a new DB. When debugging the app, it all looks like it executes correctly, and I am able to successfully create a new table. I haven't gone far with inserting and reading records yet, but I just wanted to ask: when I re-run the app, does the existing DB file get replaced with a new, empty one? If so, do I need to check whether the file exists, etc.? And how can I look at the DB on the device (iOS at this stage)? Thanks
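
    For what it's worth, File.applicationStorageDirectory persists between runs of an AIR app, and opening a SQLConnection in its default CREATE mode only creates the file if it is missing, so an existing database is normally not replaced. A minimal sketch (file name and table are illustrative, and this is not from the article's answer):

      import flash.data.SQLConnection;
      import flash.data.SQLStatement;
      import flash.filesystem.File;

      // Resolve the DB inside the per-app storage directory; this path survives restarts.
      var dbFile:File = File.applicationStorageDirectory.resolvePath("app.db");
      trace("DB already exists: " + dbFile.exists);   // true on every run after the first

      var conn:SQLConnection = new SQLConnection();
      conn.open(dbFile);   // default SQLMode.CREATE: opens if present, creates only if missing

      // Guard table creation so re-runs don't fail or wipe data.
      var stmt:SQLStatement = new SQLStatement();
      stmt.sqlConnection = conn;
      stmt.text = "CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)";
      stmt.execute();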

    Read the article

  • Java reference storage question

    - by aab
    In Java, when you pass an object to a method as a parameter, what is actually passed is a reference (a pointer) to that object, because object variables in Java hold references. Inside the method, that reference points to the object's location in memory. I am wondering where this reference itself lives in memory. Is a new memory location created inside the method to hold this reference?
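
    In short, yes: each call gets its own copy of the reference in the method's stack frame (a local variable slot), which is why reassigning the parameter inside the method does not affect the caller. A small illustrative sketch (not from the article):

      public class ReferenceDemo {
          // 'sb' is a local slot in reassign's stack frame holding a COPY of the
          // caller's reference; pointing it elsewhere changes only this copy.
          static void reassign(StringBuilder sb) {
              sb = new StringBuilder("other");
          }

          // Mutating through the copied reference does reach the shared object on the heap.
          static void mutate(StringBuilder sb) {
              sb.append(" world");
          }

          public static void main(String[] args) {
              StringBuilder s = new StringBuilder("hello");
              reassign(s);
              System.out.println(s);  // prints "hello": the caller's reference is untouched
              mutate(s);
              System.out.println(s);  // prints "hello world": the same heap object was modified
          }
      }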

    Read the article

  • /etc/resolv.conf nameserver fd00::1

    - by user88631
    My /etc/resolv.conf constantly gets a mysterious entry. I run a home network with IPv6 provided by radvd, and the interface is auto-configured by Network Manager. All name server lookups are lost when the fd00::1 line below is first in my /etc/resolv.conf: # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN nameserver fd00::1 nameserver 192.168.1.1 search home.int When ping is working, cat /etc/resolv.conf shows: # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN nameserver 192.168.1.1 search home.int So something is putting fd00::1 at the start of the file. Note that if I ping6 fd00::1 I get Destination unreachable: Administratively prohibited. To diagnose this I ran the router with a single cable connected to the Ubuntu machine, ran tcpdump, and restarted networking on Ubuntu. "tcpdump ip6 -e -i eth0 | grep fd00" finds nothing, so it's not being advertised via the network. The only hit I got was when an upstream router refused a connection attempt from the Ubuntu machine to fd00::1. I have also switched on debugging for Network Manager, and it appears to set the mystery line: 15:22:14 storage-pc NetworkManager[349]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) complete. 15:22:14 storage-pc NetworkManager[349]: <warn> dnsmasq exited with error: Other problem (5) 15:22:14 storage-pc NetworkManager[349]: <debug> [1346822534.281528] [nm-dns-manager.c:598] update_dns(): updating resolv.conf 15:22:14 storage-pc NetworkManager[349]: <debug> [1346822534.281875] [nm-dns-manager.c:719] update_dns(): DNS: plugin dnsmasq ignored (caching disabled) 15:22:14 storage-pc NetworkManager[349]: <info> ((null)): writing resolv.conf to /sbin/resolvconf 15:22:14 storage-pc dbus[2184]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' 15:22:14 storage-pc dnsmasq[2875]: reading /etc/resolv.conf 15:22:14 storage-pc dnsmasq[2875]: using nameserver 192.168.1.1#53 15:22:14 storage-pc dnsmasq[2875]: using nameserver fd00::1#53 Any suggestions on how to find out where this comes from?

    Read the article

  • Intellisense fails for boost::shared_ptr with Boost 1.40.0 in Visual Studio 2008

    - by Edward Loper
    I'm having trouble getting IntelliSense to auto-complete shared pointers for Boost 1.40.0. (It works fine for Boost 1.33.1.) Here's a simple sample project file where auto-complete does not work: #include <boost/shared_ptr.hpp> struct foo { bool func() { return true; }; }; void bar() { boost::shared_ptr<foo> pfoo; pfoo.get(); // <-- intellisense does not autocomplete after "pfoo." pfoo->func(); // <-- intellisense does not autocomplete after "pfoo->" } When I right-click on shared_ptr and do "Go to Definition," it brings me to a forward declaration of the shared_ptr class, not to the actual definition. However, it compiles fine, and auto-completion works fine after "boost::". Also, auto-completion works fine for boost::scoped_ptr and for boost::shared_array. Any ideas?

    Read the article

  • Determine target architecture of binary file in Linux (library or executable)

    - by Fernando Miguélez
    We have an issue related to a Java application running under a (rather old) FC3 on an Advantech POS board with a Via C3 processor. The Java application has several compiled shared libs that are accessed via JNI. The Via C3 processor is supposed to be i686 compatible. Some time ago, after installing Ubuntu 6.10 on a Mini-ITX board with the same processor, I found out that the previous statement is not 100% true: the Ubuntu kernel hung on startup due to the lack of some specific, optional instructions of the i686 set in the C3 processor. These instructions, missing from the C3 implementation of the i686 set, are used by default by the GCC compiler when i686 optimizations are enabled. The solution in that case was to go with an i386-compiled version of the Ubuntu distribution. The underlying problem with the Java application is that the FC3 distribution was installed on the HD by cloning from an image of the HD of another PC, this time an Intel P4. Afterwards the distribution needed some hacking to get it running, such as replacing some packages (the kernel, for one) with the i386-compiled versions. The problem is that after working for a while the system completely hangs without a trace. I am afraid that some i686 code is left somewhere in the system and could be executed at any time (for example after recovering from suspend mode or something like that). My question is: is there any tool or way to find out which specific architecture a binary file (executable or library) targets, given that "file" does not give much information?
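
    A hedged suggestion, not taken from the linked answers: readelf and objdump (both in binutils) expose more than file does. readelf -h shows the ELF machine type, and disassembling lets you grep for instructions that exist in i686 but not on the C3 (CMOV is the usual culprit behind GCC's -march=i686 output). Library paths below are placeholders:

      # ELF header: machine type, class, ABI (placeholder path)
      readelf -h /usr/lib/libexample.so | grep -E 'Class|Machine'

      # Disassemble and look for CMOVcc, which -march=i686 emits but the Via C3 lacks.
      # Any hits mean the binary is unsafe on this CPU; no hits is suggestive, not proof.
      objdump -d /usr/lib/libexample.so | grep -c -i 'cmov'

      # Scan a whole directory of shared libraries the same way (rough heuristic)
      for f in /usr/lib/*.so*; do
          n=$(objdump -d "$f" 2>/dev/null | grep -c -i 'cmov')
          [ "$n" -gt 0 ] && echo "$f: $n cmov instruction(s)"
      done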

    Read the article

  • Boost shared_ptr use_count function

    - by photo_tom
    My application problem is the following: I have a large structure, foo. Because these are large, and for memory management reasons, we do not wish to delete them when processing of the data is complete. We are storing them in std::vector<boost::shared_ptr<foo>>. My question is about knowing when all processing is complete. Our first decision is that we do not want any of the other application code to set a "complete" flag in the structure, because there are multiple execution paths in the program and we cannot predict which one finishes last. So in our implementation, once processing is complete, we destroy all copies of the boost::shared_ptr<foo> except for the one in the vector, which drops the reference count in the shared_ptr to 1. Is it practical to use shared_ptr::use_count(), checking whether it equals 1, to know when all other parts of my app are done with the data? One additional reason I'm asking is that the Boost documentation for shared_ptr recommends not using use_count() in production code.
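
    A minimal sketch of what such a check might look like, with the usual caveat (hedged, not from the article's answers): use_count() is only a snapshot, so in a multithreaded program a value of 1 can already be stale by the time you act on it; the check is reliable only if no other thread can still be copying the pointer at that moment.

      #include <cstdio>
      #include <vector>
      #include <boost/shared_ptr.hpp>

      // Placeholder for the application's large structure.
      struct foo {
          int payload[1024];
      };

      // Returns true when the vector's copy is the only shared_ptr left alive.
      // NOTE: this is a snapshot; if other threads may still copy the pointer
      // concurrently, a result of "true" can already be out of date.
      bool processingFinished(const boost::shared_ptr<foo>& keeper) {
          return keeper.use_count() == 1;
      }

      int main() {
          std::vector<boost::shared_ptr<foo>> store;
          store.push_back(boost::shared_ptr<foo>(new foo));

          {
              boost::shared_ptr<foo> worker = store[0];                     // simulated in-flight work
              std::printf("done? %d\n", processingFinished(store[0]));      // 0: worker still holds a copy
          }                                                                 // worker destroyed here

          std::printf("done? %d\n", processingFinished(store[0]));          // 1: only the vector's copy remains
          return 0;
      }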

    Read the article

  • Can’t use Sub Reports and Shared Datasets in BIDS - Data Retrieval failed for the subreport '###', located at ### Please check the log files

    - by simonsabin
    SQL Server 2008 R2 brings us a great new feature: shared datasets. These allow datasets used by parameters and by multiple reports to be defined in one place and reused across reports. I’ve been developing a report for SQLBits that required the use of subreports, all of which use a shared dataset for the agenda items. Initially I was developing in Report Builder but have now gone back to BIDS. In doing this, however, all the reports break, reporting the following error: Data Retrieval failed for the subreport...(read more)

    Read the article

  • How would you advocate not using a shared spreadsheet to track bugs / issues ?

    - by Sylvain Defresne
    In our company, the developers want to use a proper bug tracking tool to manage issues in our application. Management, however, insists on using a shared spreadsheet (formerly a shared Excel file, now a spreadsheet on a web-based solution allowing concurrent access). Their argument is that the spreadsheet gives them a more high-level view of the state of the project, as they can see how many bugs are open at a quick glance. It also lets them see who is working on each bug and get an estimate of the time required to close them all (developers are required to fill in a time estimate for the bug they are working on). As you can understand, this is not really practical for the developers (bug tracking software was invented for a reason). So how can I advocate bug tracking software to ease the developers' work? As a bonus, which software would you recommend that would still give management the feedback they want (number of open bugs, who is working on them, time estimates) at a high level?

    Read the article

  • When to use shared libraries for a web framework?

    - by CamelBlues
    tl;dr: I've found myself hosting a bunch of sites running on the same web framework (symfony 1.4). Would it be helpful if I moved all of the shared library code into the same directory and shared it across the sites? I see some advantages to this: each site takes up less disk space, and library updates (an unlikely scenario) can take place across all sites at once. I also see some disadvantages, mostly in terms of a single point of failure and the inability to have sites using different versions of the framework. My real concern, though, is performance. I hypothesize that I will see a performance increase, since the PHP code will already be cached for all sites when they call the framework. Is this a correct hypothesis?

    Read the article

  • How to auto-scan any plugged in usb storage device with clamav?

    - by ossi
    I'd like to do an automatic virus scan on any plugged-in USB device using ClamAV. I'm using Ubuntu 12.04. The closest things I found were: "Run clamav on mount of flashdrive" and "How to run a shell script when a new USB storage device is detected?" The first is not working for me and the second seems to target a known device. Is there a tutorial I've missed? Or can I get some help with udev rules that apply to any USB storage device that is added? Currently nothing seems to do anything.
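
    A hedged sketch of the udev side, not a tested recipe: the rule below matches any USB block-device partition and hands the kernel name to a helper script. The rule file name and script path are placeholders. The scan is queued with at(1) so it survives udev's short timeout for RUN programs, and the helper waits briefly because the desktop automounter has not created a mount point yet at the time the rule fires.

      # /etc/udev/rules.d/81-usb-clamscan.rules  (placeholder file name)
      ACTION=="add", SUBSYSTEM=="block", ENV{ID_BUS}=="usb", ENV{DEVTYPE}=="partition", RUN+="/usr/local/bin/usb-clamscan %k"

      # /usr/local/bin/usb-clamscan  (placeholder path; mark it executable)
      #!/bin/sh
      DEV="/dev/$1"
      # Hand the real work to at(1) so udev's RUN timeout cannot kill the scan, then
      # wait for the automounter, look up the mount point, and scan it recursively.
      echo "sleep 15; MP=\$(awk -v d=$DEV '\$1 == d {print \$2}' /proc/mounts); [ -n \"\$MP\" ] && clamscan -r --infected \"\$MP\" >> /var/log/usb-clamscan.log 2>&1" | at now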

    Read the article

  • Need to set up shared storage for Guest virtual machines that are running on a Xen host

    - by Sajith
    My environment: I am doing this at home to learn about virtualization techniques. My machine has a quad-core processor that supports Intel VT and 8GB of RAM. Xen is the virtualization platform, and all domUs are LVM-based. I mainly have two questions. First, I need shared storage for these VMs (something like NFS / NAS / iSCSI), but I don't know which is the best solution, so can someone tell me which suits best? Please note that this shared storage also needs to be accessible from the other physical machines on the network. Second, how do I implement the solution chosen in question #1? Any tutorials / guidelines / ebooks would be a great help and are highly appreciated. Thank you in advance :)
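
    If NFS turns out to be the choice (often the simplest of the three to set up for a home lab), a minimal sketch of the setup; host names, paths, and the subnet below are placeholders, and the performance trade-offs versus iSCSI for VM disk images are worth reading up on separately:

      # On the storage host: install the server and export a directory
      sudo apt-get install nfs-kernel-server
      sudo mkdir -p /srv/xen-shared

      # /etc/exports -- allow the home subnet (placeholder network) read/write access
      /srv/xen-shared  192.168.1.0/24(rw,sync,no_subtree_check)

      sudo exportfs -ra        # re-read /etc/exports

      # On the Xen dom0 and on any other physical machine that needs the storage
      sudo apt-get install nfs-common
      sudo mkdir -p /mnt/xen-shared
      sudo mount -t nfs storage-host:/srv/xen-shared /mnt/xen-shared

      # Optional /etc/fstab line to make the mount permanent
      storage-host:/srv/xen-shared  /mnt/xen-shared  nfs  defaults  0  0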

    Read the article

  • How to automatically mount a Windows shared folder on every boot up?

    - by Zabba
    I am able to access a Windows shared folder from Ubuntu 10.10 Nautilus like so: type into the location bar: smb://box/projects. Now I can see the folder in Nautilus and create/read files in it. Also, on the desktop I get a folder called "projects on box". But that folder on the desktop goes away when I reboot. So I thought I could automount the Windows shared projects folder by adding this to my fstab: //box/Projects /home/base/Projects smbfs rw,user,username=jack,password=www222,fmask=666,dmask=777 0 0 (base is my user name on Ubuntu). Now I get a folder called "Projects" in my home folder after boot up, but it is empty (I cannot see the same files that I can see in Nautilus). What am I doing wrong? Some more detail: this is what I see of the Projects folder when I do ls -l in my home folder: ... drwxr-xr-x 2 root root 4096 2011-01-01 10:22 Projects drwxr-xr-x 2 base base 4096 2011-01-01 09:06 Public ... Note the two "roots". Is that somehow the problem?
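
    For comparison, a hedged sketch of the fstab entry this usually ends up as on 10.10 and later: smbfs is deprecated in favor of cifs, and uid=/gid= plus a credentials file (so the password is not world-readable in /etc/fstab) are the options that typically make the mounted files visible to the desktop user. Paths and names mirror the question but are otherwise illustrative, not the article's accepted answer:

      # /etc/fstab -- cifs instead of smbfs; adjust uid/gid to the desktop user
      //box/Projects  /home/base/Projects  cifs  credentials=/home/base/.smbcredentials,uid=base,gid=base,iocharset=utf8,file_mode=0664,dir_mode=0775  0  0

      # /home/base/.smbcredentials  (chmod 600)
      username=jack
      password=www222

      # Apply without rebooting
      sudo mount -a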

    Read the article

  • T-SQL Snack: How Much Free Storage Space is Available?

    - by andyleonard
    Introduction Ever have a need to calculate the total available storage space for a server? Recently I did. Here's a solution I came up with - I bet someone can do this better! xp_fixeddrives There's a handy stored procedure called xp_fixeddrives that reports the available storage space: exec xp_fixeddrives This returns: drive MB free ----- ----------- C 6998 E 201066 Problem solved right? Maybe. The Sum What I really want is the sum total of all available space presented to the server. I built this...(read more)
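
    The article's own code sits behind the (read more) link; a minimal sketch of the usual approach, capturing xp_fixeddrives into a table variable and summing it (not necessarily the author's exact solution):

      -- Capture the per-drive free space reported by xp_fixeddrives, then total it.
      DECLARE @drives TABLE (drive CHAR(1), MBfree INT);

      INSERT INTO @drives (drive, MBfree)
      EXEC master..xp_fixeddrives;

      SELECT SUM(MBfree)           AS TotalMBFree,
             SUM(MBfree) / 1024.0  AS TotalGBFree
      FROM @drives;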

    Read the article

  • Google strengthens the security of its hosted services with certificate-based authentication for Cloud Storage, Prediction API

    Google strengthens the security of its hosted services with certificate-based authentication for Cloud Storage, the Prediction API, and the URL Shortener. Google has updated its Cloud services for developers by strengthening their security. The firm's hosted services can now communicate with applications that use certificate-based service accounts for authentication. For example, a web application's request to the Google Cloud Storage service can be authenticated with a certificate instead of a shared key. Security certificates offer a more robust authentication method...

    Read the article

  • How to write the path to a Windows shared folder address in the terminal?

    - by user206996
    I want to permanently mount a few Windows shared folders. I googled the topic and found a lot of good answers, but with all the solutions I have a little problem: I believe I can't properly give the address of a shared folder. I can find it the following way: "Network" - "Windows Network" - "WORKGROUP_name" - "Computer_name" - and there is the folder I would like to mount. How can I write the path to these folders in the terminal? UPDATE: I can access the needed folder through the address in the location bar, smb://server/ro/, but when I try to mount it by editing the /etc/fstab file (using the guide https://wiki.ubuntu.com/MountWindowsSharesPermanently) I get the error: mount error: could not resolve address for server: Unknown error PS: in /etc/fstab: //SERVER/RO /media/server cifs username=user,password=pwd, uid=1000,iocharset=utf8 0 0

    Read the article

  • Microsoft revisits what's new in Windows Phone 8 and the implications of the "Shared Windows Core" for developers

    Microsoft revisits what's new in Windows Phone 8 and the implications of the "Shared Windows Core" for developers. Windows Phone 8, the next major update of Microsoft's mobile platform, was presented a few weeks ago at the Windows Phone Summit event in San Francisco. An "almost complete" break with the previous version, this release introduces a fairly large number of new features and improvements. Among these, the most notable for developers are without a doubt support for native code (C/C++) and the "Shared Windows Core".

    Read the article

  • Error mounting CloudDrive snapshot in Azure

    - by Dave
    Hi, I've been running a cloud drive snapshot in dev for a while now with no probs. I'm now trying to get this working in Azure. I can't for the life of me get it to work. This is my latest error: Microsoft.WindowsAzure.Storage.CloudDriveException: Unknown Error HRESULT=D000000D ---> Microsoft.Window.CloudDrive.Interop.InteropCloudDriveException: Exception of type 'Microsoft.WindowsAzure.CloudDrive.Interop.InteropCloudDriveException' was thrown. at ThrowIfFailed(UInt32 hr) at Microsoft.WindowsAzure.CloudDrive.Interop.InteropCloudDrive.Mount(String url, SignatureCallBack sign, String mount, Int32 cacheSize, UInt32 flags) at Microsoft.WindowsAzure.StorageClient.CloudDrive.Mount(Int32 cacheSize, DriveMountOptions options) Any idea what is causing this? I'm running both the WorkerRole and Storage in Azure so it's nothing to do with the dev simulation environment disconnect. This is my code to mount the snapshot: CloudDrive.InitializeCache(localPath.TrimEnd('\\'), size); var container = _blobStorage.GetContainerReference(containerName); var blob = container.GetPageBlobReference(driveName); CloudDrive cloudDrive = _cloudStorageAccount.CreateCloudDrive(blob.Uri.AbsoluteUri); string snapshotUri; try { snapshotUri = cloudDrive.Snapshot().AbsoluteUri; Log.Info("CloudDrive Snapshot = '{0}'", snapshotUri); } catch (Exception ex) { throw new InvalidCloudDriveException(string.Format( "An exception has been thrown trying to create the CloudDrive '{0}'. This may be because it doesn't exist.", cloudDrive.Uri.AbsoluteUri), ex); } cloudDrive = _cloudStorageAccount.CreateCloudDrive(snapshotUri); Log.Info("CloudDrive created: {0}", snapshotUri, cloudDrive); string driveLetter = cloudDrive.Mount(size, DriveMountOptions.None); The .Mount() method at the end is what's now failing. Please help as this has me royally stumped! Thanks in advance. Dave

    Read the article
