Search Results

Search found 5933 results on 238 pages for 'mass storage'.

Page 216/238

  • How to access the WebBrowser object in this code? (C++)

    - by extintor
    I found this example http://www.mvps.org/user32/webhost.cab that hosts an Internet Explorer WebBrowser object, and it uses this code to access the object:

        void webhostwnd::CreateEmbeddedWebControl(void)
        {
            OleCreate(CLSID_WebBrowser, IID_IOleObject, OLERENDER_DRAW, 0, &site, &storage, (void**)&mpWebObject);
            mpWebObject->SetHostNames(L"Web Host", L"Web View");
            // I have no idea why this is necessary. Comment it out and everything works perfectly.
            OleSetContainedObject(mpWebObject, TRUE);
            RECT rect;
            GetClientRect(hwnd, &rect);
            mpWebObject->DoVerb(OLEIVERB_SHOW, NULL, &site, -1, hwnd, &rect);
            IWebBrowser2* iBrowser;
            mpWebObject->QueryInterface(IID_IWebBrowser2, (void**)&iBrowser);
            VARIANT vURL;
            vURL.vt = VT_BSTR;
            vURL.bstrVal = SysAllocString(L"http://google.com");
            VARIANT ve1, ve2, ve3, ve4;
            ve1.vt = VT_EMPTY;
            ve2.vt = VT_EMPTY;
            ve3.vt = VT_EMPTY;
            ve4.vt = VT_EMPTY;
            iBrowser->put_Left(0);
            iBrowser->put_Top(0);
            iBrowser->put_Width(rect.right);
            iBrowser->put_Height(rect.bottom);
            iBrowser->Navigate2(&vURL, &ve1, &ve2, &ve3, &ve4);
            VariantClear(&vURL);
            iBrowser->Release();
        }

    I don't have much experience with C++. I want to know how to access that same IE object (to call Navigate2, for example) from a button or something like that. How could I achieve this?
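    A minimal sketch of one way to do that (the member name mpBrowser, the button ID IDC_GO_BUTTON, and the window-procedure wiring are illustrative assumptions, not part of the original sample): instead of calling Release() at the end of CreateEmbeddedWebControl, keep the IWebBrowser2 pointer as a member of webhostwnd and reuse it from the button's WM_COMMAND handler.

        // Hedged sketch, not the sample's actual code: hold on to the browser
        // interface and drive it from a button click.
        class webhostwnd
        {
            IWebBrowser2* mpBrowser = nullptr;   // filled in once by CreateEmbeddedWebControl

        public:
            void NavigateTo(const wchar_t* url)
            {
                if (!mpBrowser) return;
                VARIANT vURL, vEmpty;
                VariantInit(&vURL);
                VariantInit(&vEmpty);
                vURL.vt = VT_BSTR;
                vURL.bstrVal = SysAllocString(url);
                mpBrowser->Navigate2(&vURL, &vEmpty, &vEmpty, &vEmpty, &vEmpty);
                VariantClear(&vURL);
            }

            // Called from the window procedure on WM_COMMAND.
            void OnCommand(WPARAM wParam)
            {
                if (LOWORD(wParam) == IDC_GO_BUTTON)   // hypothetical button ID
                    NavigateTo(L"http://google.com");
            }
        };

    Release mpBrowser in the destructor (or wherever the control is torn down) rather than immediately after the first Navigate2 call.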

    Read the article

  • Most cost effective way to target multiple mobile platforms

    - by niidto
    Hi, I have been given the task of speccing a mobile application, which will need to run on approx. 1000 devices. These devices already exist, and consist of iPhones, BlackBerrys, Androids, Windows Mobile and netbooks. The application will have simple reporting capability, and a collection of forms. The obvious solution would be to develop some browser-based solution, although given the occasionally connected nature of the devices, there's a potential for data to get lost / not saved. So instead of creating a complex application for each platform, I was thinking we could build what is effectively a form generator with basic offline storage capability (text files), designed to run on each device. The device would generate a form based on, for example, an XML file that it could request from a server somewhere, resulting in minimal specialist development costs and the ability to run most of the logic on the server end, with the devices being dumb clients that render forms and upload the data when there is an available connection. My question, summarised: how have you made the decision on supporting multiple devices for your application? Is this always an unavoidable problem, and you just have to make the call to support one or two, pay for developers to write code for each platform, or alternatively supply pre-installed devices to the company? Many thanks, James

    Read the article

  • MSBuild / PowerShell: Copy SQL Server 2012 database to SQL Azure via BACPAC (for Continuous Integration)

    - by giveme5minutes
    I'm creating a continuous integration MSBuild script which copies a database from on-premise SQL Server 2012 to SQL Azure. Easy, right?

    Methods. After a fair bit of research I've come across the following methods:

    Use PowerShell to access the DAC library directly, then use the MSBuild PowerShell extension to wrap the script. This would require installing PowerShell 3 and working out how to make the MSBuild PowerShell extension work with it, as apparently MS moved the DAC API to a different namespace in the latest version of the library. PowerShell would give direct access to the API, but may require quite a bit of boilerplate.

    Use the sample DAC Framework Client Side Tools, which requires compiling them myself, as the downloads available from Codeplex only include the Hosted version. It would also require fixing them to use DAC 3.0 classes, as they appear to currently use an earlier version of DAC. I could then call these tools from an <Exec Command="" /> in the MSBuild script. Less boilerplate, and if I hit any bumps in the road I can just make changes to the source.

    Processes. Using whichever method, the process could be either: export from on-premise SQL Server 2012 to a local BACPAC, upload the BACPAC to blob storage, and import the BACPAC into SQL Azure via the Hosted DAC. Or: export from on-premise SQL Server 2012 to a local BACPAC, then import the BACPAC into SQL Azure via the Client DAC.

    Question. All of the above seems to be quite a lot of effort for something that seems to be a standard feature... so before I start reinventing the wheel and documenting the results for all to see, is there something really obvious that I've missed here? Is there a pre-written script that MS has released that I have not yet uncovered? There's a command in the GUI of SQL Server Management Studio 2012 that does EXACTLY what I'm trying to do (right-click on the local database, click "Tasks", click "Deploy Database to SQL Azure"). Surely if it's a few clicks in the GUI it must be a single command on the command line somewhere??

    Read the article

  • Alternative or successor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the backend to a load-balanced, web-facing application implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the webservers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS, local network), and sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, and that these are slow over NFS; this will be even worse in production (where the front end and back end have even more network hardware between them) and as our database gets even bigger. While this is not a critical application, I would like to improve performance, and I have some resources available, including application developer time and Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are: improve NFS performance by tuning parameters (my instinct is we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning); move to a different key-value database, such as memcachedb or Tokyo Cabinet; or replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it). How should I approach this problem?

    Read the article

  • Simplest distributed persistent key/value store that supports primary key range queries

    - by StaxMan
    I am looking for a properly distributed (i.e. not just sharded) and persisted (not bounded by the available memory on a single node, or cluster of nodes) key/value ("NoSQL") store that supports range queries by primary key. So far the closest such system is Cassandra, which does the above. However, it adds support for other features that are not essential for me. So while I like it (and will consider using it, of course), I am trying to figure out if there might be other mature projects that implement what I need. Specifically, the only aspect of the value I need is to access it as a blob. For the key, however, I need range queries (as in, accessing values in order, limited by start and/or end values). While values can have structure, there is no need to use that structure for anything on the server side (I can do client-side data binding, flexible value/content types, etc.). As an added bonus, Cassandra-style storage (journaled, all sequential writes) seems quite optimal for my use case. To help filter out answers, I have investigated some alternatives within the general domain, like Voldemort (key/value, but no ordering) and CouchDB (just sharded, more batch-oriented); and I am aware of systems that are not quite distributed while otherwise qualifying (BDB variants, Tokyo Cabinet itself (not sure if Tyrant might qualify), Redis (in-memory store only)).

    Read the article

  • What is the purpose of the s==NULL case for mbrtowc?

    - by R..
    mbrtowc is specified to handle a NULL pointer for the s (multibyte character pointer) argument as follows: If s is a null pointer, the mbrtowc() function shall be equivalent to the call: mbrtowc(NULL, "", 1, ps) In this case, the values of the arguments pwc and n are ignored. As far as I can tell, this usage is largely useless. If ps is not storing any partially-converted character, the call will simply return 0 with no side effects. If ps is storing a partially-converted character, then since '\0' is not valid as the next byte in a multibyte sequence ('\0' can only be a string terminator), the call will return (size_t)-1 with errno==EILSEQ. and leave ps in an undefined state. The intended usage seems to have been to reset the state variable, particularly when NULL is passed for ps and the internal state has been used, analogous to mbtowc's behavior with stateful encodings, but this is not specified anywhere as far as I can tell, and it conflicts with the semantics for mbrtowc's storage of partially-converted characters (if mbrtowc were to reset state when encountering a 0 byte after a potentially-valid initial subsequence, it would be unable to detect this dangerous invalid sequence). If mbrtowc were specified to reset the state variable only when s is NULL, but not when it points to a 0 byte, a desirable state-reset behavior would be possible, but such behavior would violate the standard as written. Is this a defect in the standard? As far as I can tell, there is absolutely no way to reset the internal state (used when ps is NULL) once an illegal sequence has been encountered, and thus no correct program can use mbrtowc with ps==NULL.
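    A small C sketch of the point being made here (it assumes a UTF-8 locale; per the reasoning above, the s==NULL call should then fail with EILSEQ once a partial character is pending, and the only portable way back to the initial conversion state is to zero the mbstate_t yourself):

        #include <locale.h>
        #include <stdio.h>
        #include <string.h>
        #include <wchar.h>

        int main(void)
        {
            setlocale(LC_CTYPE, "");        /* assumes a UTF-8 locale for the demo */

            mbstate_t st;
            memset(&st, 0, sizeof st);      /* a zero-valued mbstate_t is the initial state */

            /* Leave a partially converted character pending:
               0xE2 is the first byte of a three-byte UTF-8 sequence. */
            wchar_t wc;
            mbrtowc(&wc, "\xE2", 1, &st);   /* returns (size_t)-2: incomplete */

            /* The s == NULL form is defined as mbrtowc(NULL, "", 1, &st); with a
               partial character pending, the reasoning above says it can only
               report an illegal sequence, not finish or reset anything. */
            if (mbrtowc(NULL, NULL, 0, &st) == (size_t)-1)
                perror("mbrtowc with s == NULL");

            /* The portable way to recover the initial state is to re-zero the
               object; mbsinit() confirms the reset. */
            memset(&st, 0, sizeof st);
            printf("back to initial state: %s\n", mbsinit(&st) ? "yes" : "no");
            return 0;
        }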

    Read the article

  • rails2 and aws-simple (simpledb): data cannot be deleted from amazon simpledb?

    - by z3cko
    I am developing a Ruby on Rails (2.3.8) application with Amazon SimpleDB as the data storage. I am using the aws-sdb gem, version 0.3.1. There are a few bugs, but the problems are outlined in the comments of this tutorial from Amazon: http://developer.amazonwebservices.com/connect/entry.jspa?externalID=1242 I am wondering if it is a bug in the gem or maybe a proxy issue, but I cannot delete any data from SimpleDB. Has anyone else experienced this, or has a clue?

        >> t=Team.find(:first)
        => #<Team:0x329f718 @prefix_options={}, @attributes={"updated_at"=>Fri May 28 16:33:17 UTC 2010, "id"=>0}>
        >> t.destroy
        => #<Net::HTTPOK 200 OK readbody=true>
        >> t=Team.find(:first)
        => #<Team:0x321ad38 @prefix_options={}, @attributes={"updated_at"=>Fri May 28 16:33:17 UTC 2010, "id"=>0}>

    The Team model is a normal ActiveResource model, according to said tutorial.

        class Team < ActiveResource::Base
          self.site = "http://localhost:8888"  # Proxy host + port
          self.prefix = "/fb2010_dev/"         # SDB domain
        end

    Read the article

  • Pre-populate iPhone Safari SQLite DB

    - by Matt Rogish
    I'm working with a PhoneGap app that uses Safari local storage (SQlite DB) via Javascript: http://developer.apple.com/safari/library/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/UsingtheJavascriptDatabase/UsingtheJavascriptDatabase.html On first load, the app creates the database, tables, and populates the data via a series of INSERT statements. If the user closes the app while this processing is happening, then my app database is left in an inconsistent state. What I prefer to do is deploy the SQLite DB as part of my iTunes App packaging so nothing must be populated at app cold start. However, I'm not sure if that is possible -- all of the google hits for this topic that I can find are referring to the core-data provided SQLite which is not what we're using... If it's not possible, could I wrap the entire thing in a transaction and keep re-trying it when the app is restarted? Failing that, I guess I can create a simple table with one boolean column "is_app_db_loaded?" and set it to true after I've processed all my inserts. But that's really gross... Ideas? Thanks!!

    Read the article

  • Returning c_str from a function

    - by user199421
    This is from a small library that I found online:

        const char* GetHandStateBrief(const PostFlopState* state)
        {
            static std::ostringstream out;
            ... rest of the function ...
            return out.str().c_str();
        }

    Now in my code I am doing this:

        const char *d = GetHandStateBrief(&post);
        std::cout << d << std::endl;

    At first d contained garbage. I then realized that the C string I am getting from the function is destroyed when the function returns, because std::ostringstream is allocated on the stack. So I added:

        return strdup(out.str().c_str());

    And now I can get the text I need from the function. I have two questions: 1) Am I understanding this correctly? 2) I later noticed that the ostringstream was allocated with static storage. Doesn't that mean that the object is supposed to stay in memory until the program terminates? And if so, then why can't I access the string?
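    For what it's worth, the dangling pointer here comes from the temporary std::string that out.str() returns, not from the stream itself (which, being static, does persist). A minimal sketch of the usual fix is to return std::string by value, so the caller owns the characters and nothing has to be strdup'd and later freed:

        #include <sstream>
        #include <string>

        struct PostFlopState;   // forward declaration, just to keep the sketch self-contained

        // Sketch of the common alternative: return std::string by value.
        std::string GetHandStateBrief(const PostFlopState* state)
        {
            std::ostringstream out;   // no longer needs to be static
            // ... build the description exactly as before ...
            return out.str();         // copied (or moved) out to the caller
        }

        // Caller:
        //   std::string d = GetHandStateBrief(&post);
        //   std::cout << d << std::endl;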

    Read the article

  • .Net lambda expression-- where did this parameter come from?

    - by larryq
    I'm a lambda newbie, so if I'm missing vital information in my description please tell me. I'll keep the example as simple as possible. I'm going over someone else's code and they have one class inheriting from another. Here's the derived class first, along with the lambda expression I'm having trouble understanding:

        class SampleViewModel : ViewModelBase
        {
            private ICustomerStorage storage = ModelFactory<ICustomerStorage>.Create();

            public ICustomer CurrentCustomer
            {
                get { return (ICustomer)GetValue(CurrentCustomerProperty); }
                set { SetValue(CurrentCustomerProperty, value); }
            }

            private int quantitySaved;
            public int QuantitySaved
            {
                get { return quantitySaved; }
                set
                {
                    if (quantitySaved != value)
                    {
                        quantitySaved = value;
                        NotifyPropertyChanged(p => QuantitySaved); // where does 'p' come from?
                    }
                }
            }

            public static readonly DependencyProperty CurrentCustomerProperty;

            static SampleViewModel()
            {
                CurrentCustomerProperty = DependencyProperty.Register("CurrentCustomer", typeof(ICustomer),
                    typeof(SampleViewModel), new UIPropertyMetadata(ModelFactory<ICustomer>.Create()));
            }
            // more method definitions follow...

    Note the NotifyPropertyChanged(p => QuantitySaved) call above. I don't understand where the "p" is coming from. Here's the base class:

        public abstract class ViewModelBase : DependencyObject, INotifyPropertyChanged, IXtremeMvvmViewModel
        {
            public event PropertyChangedEventHandler PropertyChanged;

            protected virtual void NotifyPropertyChanged<T>(Expression<Func<ViewModelBase, T>> property)
            {
                MvvmHelper.NotifyPropertyChanged(property, PropertyChanged);
            }
        }

    There's a lot in there that's not germane to the question I'm sure, but I wanted to err on the side of inclusiveness. The problem is, I don't understand where the 'p' parameter is coming from, and how the compiler knows to (evidently?) fill in a type value of ViewModelBase from thin air. For fun I changed the code from 'p' to 'this', since SampleViewModel inherits from ViewModelBase, but I was met with a series of compiler errors, the first one of which stated: Invalid expression term '=>'. This confused me a bit since I thought that would work. Can anyone explain what's happening here?

    Read the article

  • How to handle not-enough-isolatedstorage issue deep in data loader?

    - by Edward Tanguay
    I have a silverlight application which loads data from many external data sources into IsolatedStorage, and while loading any of these sources if it does not have enough IsolatedStorage, it ends up in a catch statement. At that point in that catch statement I would like to ask the user to click a button to approve silverlight to increase the IsolatedStorage capacity. The problem is, although I have a "SwitchPage()" method with which I display a page, if I access it at this point it is too deep in the loading process and the application always goes into an endless loop, hangs and crashes. I need a way to branch out of the application completely somehow to an independent UserControl which has a button and code behind which does the increase logic. What is a solution for an application to be able to branch out of a loading process catch statement like this, display a user control which has a button to ask the user to increase the IsolatedStorage? public static void SaveBitmapImageToIsolatedStorageFile(OpenReadCompletedEventArgs e, string fileName) { try { using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication()) { using (IsolatedStorageFileStream isfs = new IsolatedStorageFileStream(fileName, FileMode.Create, isf)) { Int64 imgLen = (Int64)e.Result.Length; byte[] b = new byte[imgLen]; e.Result.Read(b, 0, b.Length); isfs.Write(b, 0, b.Length); isfs.Flush(); isfs.Close(); isf.Dispose(); } } } catch (IsolatedStorageException) { //handle: present user with button to increase isolated storage } catch (TargetInvocationException) { //handle: not saved } }

    Read the article

  • Tree data in MySql database table

    - by Robert Koritnik
    I have a table that uses the adjacency list model for hierarchy storage. The most relevant columns in this table are therefore:

        ItemId       // is auto_increment
        ParentId
        Level
        ParentTrail  // in the form of "parentId/../parentId/itemId"

    I then created a BEFORE INSERT trigger that populates the columns Level and ParentTrail. Since the last column also includes the current item's ID, I had to use a trick in my trigger, because auto_increment values are not available in the BEFORE INSERT trigger. So I get that value from the information_schema.tables table. All works fine, until I try to write an update trigger that would update my item and its descendants when the item changes its parent (ParentId has changed). But I can't make an update on my table inside the update trigger. All I can do is change the current record's values, but not others'. I could use a separate table for hierarchy data, but that would mean that I would also have to create a view that combines these two tables (1:1 relation), and I would like to avoid this if at all possible. Is there a way to have all of this in the same table, so that these fields (Level and ParentTrail) set/update themselves automagically using triggers?

    Read the article

  • scanf segfaults and various other anomalies inside while loop

    - by Shadow
        while (1) {
            // Command prompt
            char *command;
            printf("%s>", current_working_directory);
            scanf("%s", command);   // <-- seg faults after input has been received
            printf("\ncommand:%s\n", command);
        }

    I am getting a few different errors and they don't really seem reproducible (except for the segfault at this point .<). This code worked fine about 10 minutes ago, then it infinitely looped the printf command, and now it seg faults on the line mentioned above. The only thing I changed was scanf("%s",command); to what it currently is. If I change the command variable to be an array it works; obviously this is because the storage is set aside for it. 1) I got criticized before for telling someone that they needed to malloc a pointer (but that usually seems to solve the problem, as does making it an array). 2) The command I am entering is "magic", 5 characters, so there shouldn't be any crazy stack overflow. 3) I am running on Mac OS X 10.6 with the newest version of Xcode (non-OS4) and standard gcc. 4) This is how I compile the program: gcc --std=c99 -W sfs.c. Just trying to figure out what is going on. Since this is for a school project I am never going to have to see again, I will just code some noob workaround that would make my boss cry :) But afterwards I would love to figure out why this is happening and not just make some fix for it, and if there is some fix for it, why that fix works.
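    The crash is consistent with writing through an uninitialized pointer: scanf("%s", command) stores bytes wherever command happens to point. A minimal sketch of the usual fix (the array names and sizes are illustrative) gives scanf real storage and bounds the read:

        #include <stdio.h>

        int main(void)
        {
            char current_working_directory[] = "/home/user";   /* placeholder value */
            char command[256];                                  /* real storage, not a stray pointer */

            while (1) {
                printf("%s> ", current_working_directory);

                /* The width limit (255) leaves room for the terminating '\0'.
                   fgets(command, sizeof command, stdin) is the other common choice. */
                if (scanf("%255s", command) != 1)
                    break;                                      /* EOF or read error */

                printf("\ncommand:%s\n", command);
            }
            return 0;
        }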

    Read the article

  • Which MySQL Fork/Version to Pick??

    - by Drew
    As most of you know, Sun acquired MySQL (and later Oracle acquired Sun), and during these acquisitions there was a lot of FUD in the MySQL community, which resulted in the creation of various forks. Today we have MySQL from MySQL, Percona (XtraDB) MySQL, OurDelta MySQL, MariaDB and Drizzle, to name a few. Which brings us to the source of the problem. We are in the process of upgrading our databases (hardware/software) and I would like to know which one of the forks I should go with. Each has its own set of pros/cons. We are currently using MySQL 5.0.x from MySQL on Linux on an 8-core machine. Our new hardware is a monster with 32 cores and 32GB of memory, connecting to fast NetApp storage via FC. I would like to stick with MySQL from MySQL, but I have heard horror stories about how badly MySQL 5.1 performs on many cores. I have also heard that MySQL 5.4 performs better on multi-core machines, but that it's still not production ready. In addition, I have also heard a lot of good things about the Percona builds. This is what I know so far: MySQL 5.1 from MySQL: reliable choice, but doesn't scale well on a big machine. Percona: scales well, good backing company; I don't have much experience with it. MariaDB: don't know much about it besides that it was founded by original MySQL developers (including Monty). OurDelta: don't know much. Drizzle: mostly optimized for cloud computing. I would like to know what the general notion is about this problem. Which build/version should I go with? How are you guys picking your builds/versions? Thanks!

    Read the article

  • How do I programmatically verify, create, and update SQL table structure?

    - by JYelton
    Scenario: I have an application (C#) that expects a SQL database and login, which are set by a user. Once connected, it checks for the existence of several tables and creates them if not found. I'd like to expand on this by having the program be capable of adding columns to those tables if I release a new version of the program which relies upon the new columns.

    Question: What is the best way to programmatically check the structure of an existing SQL table and create or update it to match an expected structure? I am planning to iterate through the list of required columns and alter the existing table whenever it does not contain the new column. I can't help but wonder if there's an approach that is different or better.

    Criteria: Here are some of my expectations and self-imposed rules: newer versions of the program might no longer use certain columns, but they would be retained for data-logging purposes (in other words, no columns will be removed); existing data in the table must be preserved, so the table cannot simply be dropped and recreated; in all cases, newly added columns would allow null data, so the population of old records is taken care of by having default null values.

    Example: Here is a sample table (because visual examples help!):

        id  sensor_name  sensor_status  x1    x2    x3    x4
        1   na019        OK             0.01  0.21  1.41  1.22

    Then, in a new version, I may want to add the column x5. The "x columns" are all data-storage columns that accept null.

    Read the article

  • How to find an embedded platform?

    - by gmagana
    I am new to the locating hardware side of embedded programming and so after being completely overwhelmed with all the choices out there (pc104, custom boards, a zillion option for each board, volume discounts, devel kits, ahhh!!) I am asking here for some direction. Basically, I must find a new motherboard and (most likely) re-implement the program logic. Rewriting this in C/C++/Java/C#/Pascal/BASIC is not a problem for me. so my real problem is finding the hardware. This motherboard will have several other devices attached to it. Here is a summary of what I need to do: Required: 2 RS232 serial ports (one used all the time for primary UI, the second one not continuous) 1 modem (9600+ baud ok) [Modem will be in simultaneous use with only one of the serial port devices, so interrupt sharing with one serial port is OK, but not both] Minimum permanent/long term storage: Whatever O/S requires + 1 MB (executable) + 512 KB (Data files) RAM: Minimal, whatever the O/S requires plus maybe 1MB for executable. Nice to have: USB port(s) Ethernet network port Wireless network Implementation languages (any O/S I will adapt to): First choice Java/C# (Mono ok) Second choice is C/Pascal Third is BASIC Ok, given all this, I am having a lot of trouble finding hardware that will support this that is low in cost. Every manufacturer site I visit has a lot of options, and it's difficult to see if their offering will even satisfy my must-have requirements (for example they sometimes list 3 "serial ports", but it appears that only one of the three is RS232, for example, and don't mention what the other two are). The #1 constraint is cost, #2 is size. Can anyone help me with this? This little task has left me thinking I should have gone for EE and not CS :-). EDIT: A bit of background: This is a system currently in production, but the original programmer passed away, and the current hardware manufacturer cannot find hardware to run the (currently) DOS system, so I need to reimplement this in a modern platform. I can only change the programming and the motherboard hardware.

    Read the article

  • Reverse geocode street name and city as text

    - by Taylor Satula
    Hello, I have been having some trouble finding a good way to output just the street name and city as text (Infinite Loop, Cupertino shown here) that can be displayed in my iPhone app. This needs to be able to change dynamically as you change streets and cities. I don't have the slightest idea of how to do this; I hope someone can help. I have attached a very rough image of what I am trying to achieve. I have found this (http://code.google.com/apis/maps/documentation/javascript/services.html#Geocoding) for Google Maps about how to reverse geocode using JavaScript, but what I do not understand is how this would be done in an iPhone development setting. I work in web design and I see how it would be done in HTML, but I am very new to iPhone development and don't have the slightest clue of how it would be done here. If someone could spell out how to do this I would be extremely grateful. I cannot seem to find what I am looking for by searching Google. Reference picture: http://www.threepixeldrift.com/images/deep-storage/reversegeocodeiphoneapp.jpg

    Read the article

  • Indices instead of pointers in STL containers?

    - by zvrba
    Due to specific requirements [*], I need a singly-linked list implementation that uses integer indices instead of pointers to link nodes. The indices are always interpreted with respect to a vector containing the list nodes. I thought I might achieve this by defining my own allocator, but looking into gcc's implementation of <list>, it explicitly uses pointers for the link fields in the list nodes (i.e., it does not use the pointer type provided by the allocator):

        struct _List_node_base
        {
            _List_node_base* _M_next; ///< Self-explanatory
            _List_node_base* _M_prev; ///< Self-explanatory
            ...
        }

    (For this purpose, the allocator interface is also deficient in that it does not define a dereference function; "dereferencing" an integer index always needs a pointer to the underlying storage.) Do you know of a library of STL-like data structures (I am mostly in need of singly- and doubly-linked lists) that uses indices (with respect to a base vector) instead of pointers to link nodes?

    [*] Saving space: the lists will contain many 32-bit integers. With two pointers per node (the STL list is doubly linked), the overhead is 200%, or 400% on a 64-bit platform, not counting the overhead of the default allocator.
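    For comparison, a minimal hand-rolled sketch of the pattern being asked about (the names and the 0xFFFFFFFF sentinel are illustrative choices, not from any particular library): nodes live in one std::vector and the links are 32-bit indices into it, so the per-node link overhead stays at 4 bytes regardless of pointer size.

        #include <cstdint>
        #include <vector>

        // Illustrative sketch only: an index-linked singly-linked list.
        struct IndexList
        {
            static constexpr std::uint32_t npos = 0xFFFFFFFFu;   // the "null" link

            struct Node
            {
                std::uint32_t value;   // payload (32-bit integers, as in the question)
                std::uint32_t next;    // index of the next node, or npos
            };

            std::vector<Node> nodes;   // backing storage for all nodes
            std::uint32_t head = npos;

            // Pushes a value at the front and returns the new node's index.
            std::uint32_t push_front(std::uint32_t v)
            {
                nodes.push_back(Node{v, head});
                head = static_cast<std::uint32_t>(nodes.size() - 1);
                return head;
            }
        };

        // Traversal uses indices instead of pointers:
        //   for (std::uint32_t i = l.head; i != IndexList::npos; i = l.nodes[i].next) { ... }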

    Read the article

  • Is DB logging more secure than file logging for my PHP web app?

    - by iama
    I would like to log error, informational and warning messages from within my web application. I was initially thinking of logging all of these to a text file. However, my PHP web app would need write access to the log files, and the folder housing the log file may also need write access if log-file rotation is desired; my web app currently has neither. The alternative is to log the messages to the MySQL database, since my web app is already using MySQL for all its data storage needs. This got me thinking that the MySQL option is much better than the file option, since I already have a configuration file with the database access information protected using file system permissions. If I instead go with the log file option I need to tinker with the file and folder access permissions, and this will only make my application less secure and defeat the whole purpose of logging. Is this correct? I am using XAMPP for development and am a newbie to LAMP. Please let me know your recommendations for logging. Thanks.

    Read the article

  • Working with datetime type in Quickbooks My Time files

    - by jakemcgraw
    I'm attempting to process QuickBooks My Time imt files using PHP. The imt file is a plaintext XML file. I've been able to use the PHP SimpleXML library with no issues but one: the numeric representation of datetime in the My Time XML files is something I've never seen before:

        <object type="TIMEPERIOD" id="z128">
            <attribute name="notes" type="string"></attribute>
            <attribute name="start" type="date">308073428.00000000000000000000</attribute>
            <attribute name="running" type="bool">0</attribute>
            <attribute name="duration" type="double">3600</attribute>
            <attribute name="datesubmitted" type="date">310526237.59616601467132568359</attribute>
            <relationship name="activity" type="1/1" destination="ACTIVITY" idrefs="z130"></relationship>
        </object>

    You can see that attribute[@name='start'] has a value of 308073428.00000000000000000000. This is not an Excel-based method of storage (308,073,428 is too many days since 1900-01-00), and it isn't Unix epoch either. So, my question is, has anyone ever seen this type of datetime representation?
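    One hypothesis worth checking, not confirmed by the post itself: the magnitude and the fractional seconds look like a Core Foundation / Cocoa "absolute time", i.e. seconds since 2001-01-01 00:00:00 UTC, which would be plausible for a Mac application. A quick C sketch of the conversion under that assumption (978307200 is that reference date expressed as a Unix timestamp):

        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
            /* Assumption only: treat the value as seconds since 2001-01-01 00:00:00 UTC. */
            double mytime_value = 308073428.0;
            time_t unix_time = (time_t)(mytime_value + 978307200.0);

            char buf[32];
            strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&unix_time));
            printf("%s\n", buf);   /* a date in late 2010, if the hypothesis holds */
            return 0;
        }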

    Read the article

  • Spring: Bean fails to read off values from external Properties file when using @Value annotation

    - by daydreamer
    XML Configuration <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="http://www.springframework.org/schema/util" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd"> <util:properties id="mongoProperties" location="file:///storage/local.properties" /> <bean id="mongoService" class="com.business.persist.MongoService"></bean> </beans> and MongoService looks like @Service public class MongoService { @Value("#{mongoProperties[host]}") private String host; @Value("#{mongoProperties[port]}") private int port; @Value("#{mongoProperties[database]}") private String database; private Mongo mongo; private static final Logger LOGGER = LoggerFactory.getLogger(MongoService.class); public MongoService() throws UnknownHostException { LOGGER.info("host=" + host + ", port=" + port + ", database=" + database); mongo = new Mongo(host, port); } public void putDocument(@Nonnull final DBObject document) { LOGGER.info("inserting document - " + document.toString()); mongo.getDB(database).getCollection(getCollectionName(document)).insert(document, WriteConcern.SAFE); } I write my MongoServiceTest as public class MongoServiceTest { @Autowired private MongoService mongoService; public MongoServiceTest() throws UnknownHostException { mongoRule = new MongoRule(); } @Test public void testMongoService() { final DBObject document = DBContract.getUniqueQuery("001"); document.put(DBContract.RVARIABLES, "values"); document.put(DBContract.PVARIABLES, "values"); mongoService.putDocument(document); } and I see failures in tests as 12:37:25.224 [main] INFO c.s.business.persist.MongoService - host=null, port=0, database=null java.lang.NullPointerException at com.business.persist.MongoServiceTest.testMongoService(MongoServiceTest.java:40) Which means bean was not able to read the values from local.properties local.properties ### === MongoDB interaction === ### host="127.0.0.1" port=27017 database=contract How do I fix this? update It doesn't seem to read off the values even after creating setters/getters for the fields. I am really clueless now. How can I even debug this issue? Thanks much!

    Read the article

  • Document management, SCM ?

    - by tsunade
    Hello, this might not be a hard-core programming question, but I suspect it's related to some of the tools used by programmers. We're a bunch of people, each with a bunch of documents, and a bunch of different computers on a bunch of operating systems (well, only two: Linux and Windows). The best way these documents can be stored/managed is if they are available offline (the laptop might not always be online) but also synchronized between all the machines. Having a server with extra-reliable storage act as a "base repository" seems like a good idea to me. Using an SCM comes to mind and I've tried Subversion, and it seems to be a good thing that it uses a centralized repository, but: when checking out, the total size of the checkout is roughly double the original size, and big files or big repositories seem to slow it down. I've also tried rsync, which might work, but it's a bit rough when it comes to potential conflicts. Finally I've tried Unison (which is a wrapper around rsync, I think) and while it works, it becomes horribly slow for the big directories we have here since it has to scan everything. So the question is: is there an SCM tool out there that is actually practical to use for a big bunch of both small and big files? If that's a no, does anyone know other tools that do this job? Thanks for reading :)

    Read the article

  • Database abstraction/adapters for ruby

    - by Stiivi
    What database abstractions/adapters are you using in Ruby? I am mainly interested in data-oriented features, not in those with object mapping (like ActiveRecord or DataMapper). I am currently using Sequel. Are there any other options? I am mostly interested in: a simple, clean and non-ambiguous API; data selection (obviously), filtering and aggregation; raw value selection without field mapping, i.e. SELECT col1, col2, col3 => [val1, val2, val3], not a hash of { :col1 => val1 ... }; an API that takes table schemas 'some_schema.some_table' into account in a consistent (and working) way, with reflection for this (get schema from table); database reflection: get the list of table columns, their database storage types and perhaps the adapter's abstracted types; table creation and deletion; and being able to work with other tables (insert, update) in a loop enumerating a selection from another table, without having to fetch all records from the table being enumerated. The purpose is to manipulate data whose structure is unknown at the time of writing the code, which is the opposite of object mapping, where the structure, or most of it, is usually well known. I do not need the object-mapping overhead. What are the options, including back-ends for object-mapping libraries?

    Read the article

  • temporary tables within stored procedures on slave servers with readonly set

    - by lau
    Hi, we have set up a master/slave replication scheme and we've had problems lately because some users wrote directly to the slave instead of the master, making the whole setup inconsistent. To prevent these problems from happening again, we've decided to remove the insert, delete, update, etc. rights from the users accessing the slave. The problem is that some stored procedures (for reading) require temporary tables. I read that changing the global variable read_only to true would do what I want and allow the stored procedures to work correctly ( http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_read_only ), but I keep getting the error: The MySQL server is running with the --read-only option so it cannot execute this statement (1290). The stored procedure that I used (for testing purposes) is this one:

        DELIMITER $$
        DROP PROCEDURE IF EXISTS test_readonly $$
        CREATE DEFINER=dbuser@% PROCEDURE test_readonly()
        BEGIN
            CREATE TEMPORARY TABLE IF NOT EXISTS temp (
                BT_INDEX int(11),
                BT_DESC VARCHAR(10)
            );
            INSERT INTO temp (BT_INDEX, BT_DESC) VALUES (222,'walou'), (111,'bidouille');
            DROP TABLE temp;
        END $$
        DELIMITER ;

    The CREATE TEMPORARY TABLE and the DROP TABLE work fine with the read-only flag (if I comment out the INSERT line, it runs fine), but whenever I want to insert into or delete from that temporary table, I get the error message. I use MySQL 5.1.29-rc. My default storage engine is InnoDB. Thanks in advance, this problem is really driving me crazy.

    Read the article

  • What exactly is a reentrant function?

    - by eSKay
    Most of the time, the definition of reentrance is quoted from Wikipedia: "A computer program or routine is described as reentrant if it can be safely called again before its previous invocation has been completed (i.e. it can be safely executed concurrently). To be reentrant, a computer program or routine: must hold no static (or global) non-constant data; must not return the address of static (or global) non-constant data; must work only on the data provided to it by the caller; must not rely on locks to singleton resources; must not modify its own code (unless executing in its own unique thread storage); must not call non-reentrant computer programs or routines." How is "safely" defined? If a program can be safely executed concurrently, does it always mean that it is reentrant? What exactly is the common thread between the six points mentioned that I should keep in mind while checking my code for reentrancy? Also: are all recursive functions reentrant? Are all thread-safe functions reentrant? Are all recursive and thread-safe functions reentrant? While writing this question, one thing comes to mind: are terms like reentrance and thread safety absolute at all, i.e. do they have fixed, concrete definitions? For, if they are not, this question is not very meaningful. Thanks!
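    A small C illustration of the distinction the list is drawing (the function names are made up for the example): the first version keeps and returns static state, so it breaks the first two rules and is neither reentrant nor thread-safe; the second works only on caller-supplied data, which is the property the remaining rules are trying to guarantee.

        #include <string.h>

        /* Not reentrant (and not thread-safe): the result lives in static storage,
           so a second call, whether from another thread or from a signal handler
           that interrupts the first call, clobbers the earlier result. */
        char *to_upper_static(const char *s)
        {
            static char buf[64];
            size_t i;
            for (i = 0; s[i] != '\0' && i < sizeof buf - 1; i++)
                buf[i] = (s[i] >= 'a' && s[i] <= 'z') ? s[i] - 32 : s[i];
            buf[i] = '\0';
            return buf;
        }

        /* Reentrant sketch: all state is supplied by the caller, nothing is static,
           no locks are taken, and no non-reentrant routine is called. */
        void to_upper_r(const char *s, char *out, size_t out_len)
        {
            size_t i;
            if (out_len == 0) return;
            for (i = 0; s[i] != '\0' && i + 1 < out_len; i++)
                out[i] = (s[i] >= 'a' && s[i] <= 'z') ? s[i] - 32 : s[i];
            out[i] = '\0';
        }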

    Read the article
